Abstract:
The outsourcing market has been growing in recent years and is expected to keep doing so, but this growth has been limited by the failure of many projects, which in some cases has led organizations to take those services back in-house (insourcing). These failures have largely been due to problems with providers: lack of experience, lack of the capabilities needed to take on the projects, and difficulties in communication. Unlike in other disciplines, there is no methodology that helps both customers and providers of IT outsourcing services to govern and manage their projects and achieve the intended results. In recent years, alongside the expansion of outsourcing, some models and good-practice frameworks for managing outsourcing projects have appeared, but they generally cover only some aspects of management. They cannot be considered methodologies, because they do not define roles, responsibilities, or deliverables; they are usually the result of experience in managing other kinds of projects. It should also be noted that, except for eSCM-SP, a good-practice model for improving service-provision capability, all of them are client-oriented. The aim of this thesis is, on the one hand, to demonstrate the need for a methodology that guides providers throughout the whole life cycle of an outsourcing project and, on the other, to propose a methodology that covers every stage: the initial search for business opportunities, the evaluation of RFP proposals, the decision whether or not to bid for the service provision, participation in due diligence, the signing of the contract, transition and service delivery, and finally the termination of the contract.
The methodology is organized around a five-stage outsourcing life cycle; for each stage it defines the participating roles and their responsibilities, the activities to be carried out, and the deliverables to be produced, which serve as control elements both for project management and for service provision. The methodology has been validated by applying it to IT service-provision projects of a medium-sized Spanish company and comparing the results with those obtained in previous projects.
Abstract:
This project studies the economic and technical feasibility of operating a limestone quarry in the Calizas del Páramo geological formation. The deposit is a surface deposit with an average depth of 30 to 35 m. The mining concession is located in the Alcarria de Alcalá subregion, in the southeast of the Community of Madrid. The extracted limestone is used to manufacture cement, micronized products, and paints, with the by-product used as aggregate in various construction materials, such as concrete, asphalt mixes, sub-base for roads and paths, and other similar uses. From a technical point of view, the quarry is worked as an open pit by descending benching with progressive restoration, using drilling and blasting for extraction, followed by mechanical loading and haulage of the material. The Exploitation Project plans the extraction of a total volume of material of about 32.85 Mt. Given the exploitable reserves and the planned annual production rate, the operating period contemplated in this project is on the order of 30 years from the current situation. The study of the economic indicators shows that the project is profitable; it can therefore be concluded that the operation is viable both technically and economically.
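The reserve and life-of-mine figures quoted above imply an annual production rate. A quick back-of-the-envelope check, assuming the 32.85 Mt total and the roughly 30-year operating period are the only inputs (the thesis itself may use slightly different figures):

```python
# Sanity check of the abstract's figures (assumed values: 32.85 Mt of
# exploitable reserves over an operating period of about 30 years).
total_reserves_mt = 32.85   # million tonnes
operating_years = 30

annual_rate_mt = total_reserves_mt / operating_years
print(f"Implied annual production: {annual_rate_mt:.2f} Mt/year")
```

This gives an implied production rate of roughly 1.1 Mt per year, consistent with a medium-sized aggregates and cement-feed operation.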
Abstract:
Advanced composite materials are increasingly used in the strengthening of reinforced concrete (RC) structures. The use of externally bonded strips made of fibre-reinforced plastics (FRP) as a strengthening method has gained widespread acceptance in recent years, since it has many advantages over traditional techniques. Unfortunately, however, this strengthening method is often associated with a brittle and sudden failure caused by some form of FRP bond failure, originating at the termination of the FRP material or at intermediate areas in the vicinity of flexural cracks in the RC beam. To date, little effort has been devoted to the early prediction of debonding in its initial instants, even though this effect is not noticeable by simple visual observation. Early detection of this phenomenon might help in taking actions to prevent future catastrophes. Fibre-optic Bragg grating (FBG) sensors are able to measure strains locally with high resolution and accuracy. Furthermore, since their physical size is extremely small compared with other strain-measuring components, they can be embedded at the concrete-FRP interface to determine the strain distribution without influencing the mechanical properties of the host materials. This paper presents the development of a debonding identification methodology based on experimentally measured strains. For this, a simplified model is implemented to simulate the behaviour of FRP-strengthened reinforced concrete beams. This model is taken as the basis for a model-updating procedure able to detect minor debonding at the concrete-FRP interface from experimental strains obtained with FBG sensors embedded at the interface.
Abstract:
Predicting share prices on the stock market has long been a major topic in the field of investment, attracting both academics and investors for many years. The underlying assumption is that the information available about a listed company's past has some bearing on its future value. This work aims to help a person or organization that invests in the stock market, by buying or selling a company's shares, to decide when to buy or sell, based on knowledge extracted from the historical values of that company's shares. The decision is inferred from a multiple regression model, one of the techniques of data mining. To achieve this, the CRISP-DM methodology is applied to the historical data of the company with the highest current market value on the NASDAQ.
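As a rough illustration of the kind of model the abstract describes, a multiple regression over lagged historical prices can be sketched with NumPy. The data below are synthetic stand-ins for the real NASDAQ history, and the lag-based predictors are only an assumption; the thesis may use different regressors:

```python
import numpy as np

# Synthetic "historical closing prices" stand in for the real data.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, size=300))

# Multiple regression: predict tomorrow's close from the last k closes.
k = 5
X = np.column_stack([prices[i:len(prices) - k + i] for i in range(k)])
X = np.column_stack([np.ones(len(X)), X])       # intercept term
y = prices[k:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares fit

# One-step-ahead forecast from the most recent k closes,
# turned into a naive buy/sell signal.
last = np.concatenate([[1.0], prices[-k:]])
forecast = last @ coef
signal = "buy" if forecast > prices[-1] else "sell"
print(f"forecast={forecast:.2f}, last close={prices[-1]:.2f} -> {signal}")
```

In a CRISP-DM setting this corresponds to the modeling phase; data understanding and preparation (cleaning, splitting, feature selection) would precede it, and evaluation against held-out data would follow.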
Abstract:
Diabetes mellitus is a disorder of carbohydrate metabolism characterized by absent or insufficient secretion of insulin (a hormone produced by the pancreas), resulting from the malfunction of the endocrine part of the pancreas or from a growing resistance of the body to this hormone. After digestion, the food we eat is broken down into smaller chemical compounds by the exocrine tissues. The absence or limited effectiveness of this polypeptide hormone prevents the metabolization of the ingested carbohydrates, with two consequences: the glucose concentration in the blood rises, since the cells cannot metabolize it; and the liver consumes fatty acids, releasing ketone bodies to supply energy to the cells. This exposes the chronic patient to very high blood glucose concentrations (hyperglycemia), which in the medium or long term can cause multiple medical problems: ophthalmological, renal, cardiovascular, cerebrovascular, neurological, and others. Diabetes is a major public-health problem and one of the most common diseases in developed countries, favoured by factors such as obesity and a sedentary lifestyle. This project works with clinical experimentation data from patients with type 1 diabetes, an autoimmune disease in which the beta cells of the pancreas (the insulin producers) are destroyed, making the administration of exogenous insulin necessary. A patient with type 1 diabetes must therefore follow a treatment of subcutaneously administered insulin, adapted to their metabolic needs and lifestyle.
To address the regulation of the patient's metabolic control through insulin therapy, we rely on the "Artificial Endocrine Pancreas" (Páncreas Endocrino Artificial, PEA) project, which consists of an insulin infusion pump, a continuous glucose sensor, and a closed-loop control algorithm. The main objective of the PEA is to give the patient precision, efficacy, and safety in normalizing glycemic control and reducing the risk of hypoglycemia. The PEA works through the subcutaneous route, so the delay introduced by insulin action, the delay in the glucose measurement, and the errors introduced by continuous glucose sensors when they drift out of calibration all complicate the use of a control algorithm. This makes it necessary to model the patient's glucose with predictive systems. A model is any element that allows us to predict the behaviour of a system from input variables. What we obtain is a prediction of the future states of the patient's glucose, using as inputs the insulin, meal intake, and glucose values already known because they occurred earlier in time. When the glucose predictor is used with parameters obtained in real time, it can indicate the future glucose level for the decision making of the closed-loop (CL) controller. The predictors currently used in the PEA are not working correctly because of the amount of information and the number of variables they must handle. Data mining, also referred to as Knowledge Discovery in Databases (KDD), has been defined as the non-trivial process of extracting implicit, previously unknown, and potentially useful information.
The work follows the phases of the knowledge-extraction process: data selection, pre-processing, transformation, data mining, interpretation of results, and evaluation and acquisition of knowledge. Through this process we seek to generate a single insulin-glucose model that fits each patient individually and, at the same time, can predict future glucose states with real-time calculations from a set of input parameters. This project extracts the information contained in a database of type 1 diabetic patients obtained from clinical experimentation, using data mining techniques. To this end, we implemented a graphical interface that guides the user through each step of the KDD process, with graphical and statistical information at every point. For the data mining stage we used the WEKA toolkit, controlling all of its functions from Java within the program we created. Finally, the project gains additional potential from the possibility of porting the code to Android devices, through which new and very useful applications could be implemented in this field. As a conclusion of the project, and after an exhaustive analysis of the results obtained, we show that we were able to obtain the insulin-glucose model for each patient.
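The kind of per-patient insulin-glucose predictor the abstract describes can be sketched as a simple ARX model (autoregression with exogenous insulin and meal inputs). Everything below is illustrative: the data are synthetic, and the model structure and coefficients are assumptions, not the model identified in the project:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
insulin = rng.uniform(0, 2, n)    # synthetic insulin inputs (U)
carbs = rng.uniform(0, 40, n)     # synthetic carbohydrate intake (g)

# Generate synthetic glucose traces with known linear dynamics plus noise.
glucose = np.empty(n)
glucose[0] = 120.0
for t in range(n - 1):
    glucose[t + 1] = (0.9 * glucose[t] - 4.0 * insulin[t]
                      + 0.5 * carbs[t] + 12.0 + rng.normal(0, 1))

# Fit ARX coefficients by least squares:
# g[t+1] ~ a*g[t] + b*insulin[t] + c*carbs[t] + d
X = np.column_stack([glucose[:-1], insulin[:-1], carbs[:-1], np.ones(n - 1)])
y = glucose[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead prediction of the next glucose state.
pred = np.array([glucose[-1], insulin[-1], carbs[-1], 1.0]) @ coef
print(f"predicted next glucose: {pred:.1f} mg/dL, "
      f"coefficients: {np.round(coef, 2)}")
```

A real predictor would need to handle the subcutaneous delays and sensor drift mentioned above, which is where the data-mining approach of the project comes in.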
Abstract:
The huge amount of data recorded daily in organizations' database systems has created the need to analyze it, yet organizations face the complexity of processing enormous volumes of data with traditional methods of analysis. Moreover, in a globalized and competitive environment, organizations constantly seek to improve their processes, for which they need tools that let them make better decisions. This means being better informed and knowing their digital history, so that they can describe their processes and anticipate (predict) unforeseen events. These new data-analysis requirements have driven the growing development of data mining projects. The data mining process seeks to obtain, from a massive data set, models that describe the data or predict new instances in the set. It involves data preparation and partially or fully automated processing to identify models in the data, producing patterns, relationships, or rules as output. This output should represent new knowledge for the organization, useful and understandable for end users, which can be integrated into its processes to support decision making. The main difficulty, however, is precisely enabling the data analyst who takes part in this process to identify the models, a complex task that often requires the experience not only of the data analyst but also of the expert in the problem domain. One way to support the analysis of data, models, and patterns is through their visual representation, exploiting the human capacity for visual perception, which can detect patterns more easily. Under this approach, visualization has been used in data mining mostly for the descriptive analysis of the data (input) and the presentation of the patterns (output), leaving the paradigm little exploited for the analysis of models. This document describes the development of the doctoral thesis entitled "Nuevos Esquemas de Visualizaciones para Mejorar la Comprensibilidad de Modelos de Data Mining" ("New Visualization Schemes to Improve the Understandability of Data-Mining Models"). The research contributes a visualization approach to support the understanding of data mining models, for which it proposes the metaphor of visually augmented models.
Abstract:
A pseudoknot formed by a long-range interaction in the mRNA of the initiation factor 3 (IF3) operon is involved in the translational repression of the gene encoding ribosomal protein L35 by another ribosomal protein, L20. The nucleotides forming the 5′ strand of the key stem of the pseudoknot are located within the gene for IF3, whereas those forming the 3′ strand are located 280 nt downstream, immediately upstream of the Shine–Dalgarno sequence of the gene for L35. Here we show that premature termination of IF3 translation at a nonsense codon introduced upstream of the pseudoknot results in a substantial enhancement of L20-mediated repression of L35 expression. Conversely, an increase of IF3 translation decreases repression. These results, in addition to an analysis of the effect of mutations in sequences forming the pseudoknot, indicate that IF3 translation decreases L20-mediated repression of L35 expression. We propose that ribosomes translating IF3 disrupt the pseudoknot and thereby attenuate repression. The result is a novel type of translational coupling, where unfolding of the pseudoknot by ribosomes translating IF3 does not increase expression of L35 directly, but alleviates its repression by L20.
RGS proteins reconstitute the rapid gating kinetics of Gβγ-activated inwardly rectifying K+ channels
Abstract:
G protein-gated inward rectifier K+ (GIRK) channels mediate hyperpolarizing postsynaptic potentials in the nervous system and in the heart during activation of Gα(i/o)-coupled receptors. In neurons and cardiac atrial cells the time course for receptor-mediated GIRK current deactivation is 20–40 times faster than that observed in heterologous systems expressing cloned receptors and GIRK channels, suggesting that an additional component(s) is required to confer the rapid kinetic properties of the native transduction pathway. We report here that heterologous expression of “regulators of G protein signaling” (RGS proteins), along with cloned G protein-coupled receptors and GIRK channels, reconstitutes the temporal properties of the native receptor → GIRK signal transduction pathway. GIRK current waveforms evoked by agonist activation of muscarinic m2 receptors or serotonin 1A receptors were dramatically accelerated by coexpression of either RGS1, RGS3, or RGS4, but not RGS2. For the brain-expressed RGS4 isoform, neither the current amplitude nor the steady-state agonist dose-response relationship was significantly affected by RGS expression, although the agonist-independent “basal” GIRK current was suppressed by ≈40%. Because GIRK activation and deactivation kinetics are the limiting rates for the onset and termination of “slow” postsynaptic inhibitory currents in neurons and atrial cells, RGS proteins may play crucial roles in the timing of information transfer within the brain and to peripheral tissues.
Abstract:
During reverse transcription of retroviral RNA, synthesis of (−) strand DNA is primed by a cellular tRNA that anneals to an 18-nt primer binding site within the 5′ long terminal repeat. For (+) strand synthesis using a (−) strand DNA template linked to the tRNA primer, only the first 18 nt of tRNA are replicated to regenerate the primer binding site, creating the (+) strand strong stop DNA intermediate and providing a 3′ terminus capable of strand transfer and further elongation. On model HIV templates that approximate the (−) strand linked to natural modified or synthetic unmodified tRNA3Lys, we find that a (+) strand strong stop intermediate of the proper length is generated only on templates containing the natural, modified tRNA3Lys, suggesting that a posttranscriptional modification provides the termination signal. In the presence of a recipient template, synthesis after strand transfer occurs only from intermediates generated from templates containing modified tRNA3Lys. Reverse transcriptase from Moloney murine leukemia virus and avian myeloblastosis virus shows the same requirement for a modified tRNA3Lys template. Because all retroviral tRNA primers contain the same 1-methyl-A58 modification, our results suggest that 1-methyl-A58 is generally required for termination of replication 18 nt into the tRNA sequence, generating the (+) strand intermediate, strand transfer, and subsequent synthesis of the entire (+) strand. The possibility that the host methyl transferase responsible for methylating A58 may provide a target for HIV chemotherapy is discussed.
Abstract:
Premature termination of protein synthesis by nonsense mutations is at the molecular origin of a number of inherited disorders in the family of G protein-coupled seven-helix receptor proteins. To understand how such truncated polypeptides are processed by the cell, we have carried out COS-1 cell expression studies of mutants of bovine rhodopsin truncated at the first 1, 1.5, 2, 3, or 5 transmembrane segments (TMS) of the seven present in wild-type opsin. Our experiments show that successful completion of different stages in the cellular processing of the protein [membrane insertion, N-linked glycosylation, stability to proteolytic degradation, and transport from the endoplasmic reticulum (ER) membrane] requires progressively longer lengths of the polypeptide chain. Thus, none of the truncations affected the ability of the polypeptides to be integral membrane proteins. C-terminal truncations that generated polypeptides with fewer than two TMS resulted in misorientation and prevented glycosylation at the N terminus, whereas truncations that generated polypeptides with fewer than five TMS greatly destabilized the protein. However, all of the truncations prevented exit of the polypeptide from the ER. We conclude that during the biogenesis of rhodopsin, proper integration into the ER membrane occurs only after the synthesis of at least two TMS is completed. Synthesis of the next three TMS confers a gradual increase in stability, whereas the presence of more than five TMS is necessary for exit from the ER.
Abstract:
The oligomerization of activated d- and l- and racemic guanosine-5′-phosphoro-2-methylimidazole on short templates containing d- and l-deoxycytidylate has been studied. Results obtained with d-oligo(dC)s as templates are similar to those previously reported for experiments with a poly(C) template. When one l-dC or two consecutive l-dCs are introduced into a d-template, regiospecific synthesis of 3′-5′ oligo(G)s proceeds to the end of the template, but three consecutive l-dCs block synthesis. Alternating d-,l-oligomers do not facilitate oligomerization of the d-, l-, and racemic guanosine-5′-phosphoro-2-methylimidazole. We suggest that once a “predominantly d-metabolism” existed, occasional l-residues in a template would not have led to the termination of self-replication.
Abstract:
To determine the mechanisms responsible for the termination of Ca2+-activated Cl− currents (ICl(Ca)), simultaneous measurements of whole cell currents and intracellular Ca2+ concentration ([Ca2+]i) were made in equine tracheal myocytes. In nondialyzed cells, or cells dialyzed with 1 mM ATP, ICl(Ca) decayed before the [Ca2+]i decline, whereas the calcium-activated potassium current decayed at the same rate as [Ca2+]i. Substitution of 5′-adenylylimidodiphosphate (AMP-PNP) or ADP for ATP markedly prolonged the decay of ICl(Ca), resulting in a rate of current decay similar to that of the fall in [Ca2+]i. In the presence of ATP, dialysis of the calmodulin antagonist W7, the Ca2+/calmodulin-dependent kinase II (CaMKII) inhibitor KN93, or a CaMKII-specific peptide inhibitor slowed the rate of ICl(Ca) decay so that it matched the [Ca2+]i decline, whereas H7, a nonspecific kinase inhibitor with low affinity for CaMKII, was without effect. When a sustained increase in [Ca2+]i was produced in ATP-dialyzed cells, the current decayed completely, whereas in cells loaded with AMP-PNP, KN93, or the CaMKII inhibitory peptide, ICl(Ca) did not decay. Slowly decaying currents were repeatedly evoked in ADP- or AMP-PNP-loaded cells, but dialysis of adenosine 5′-O-(3-thiotriphosphate) or okadaic acid resulted in a smaller initial ICl(Ca), and little or no current (despite a normal [Ca2+]i transient) with a second stimulation. These data indicate that CaMKII phosphorylation results in the inactivation of calcium-activated chloride channels, and that transition from the inactivated state to the closed state requires protein dephosphorylation.
Abstract:
Arrestins are regulatory proteins that participate in the termination of G protein-mediated signal transduction. The major arrestin in the Drosophila visual system, Arrestin 2 (Arr2), is phosphorylated in a light-dependent manner by a Ca2+/calmodulin-dependent protein kinase and has been shown to be essential for the termination of the visual signaling cascade in vivo. Here, we report the isolation of nine alleles of the Drosophila photoreceptor cell-specific arr2 gene. Flies carrying each of these alleles underwent light-dependent retinal degeneration and displayed electrophysiological defects typical of previously identified arrestin mutants, including an allele encoding a protein that lacks the major Ca2+/calmodulin-dependent protein kinase site. The phosphorylation mutant had very low levels of phosphorylation and lacked the light-dependent phosphorylation observed with wild-type Arr2. Interestingly, we found that the Arr2 phosphorylation mutant was still capable of binding to rhodopsin; however, it was unable to release from membranes once rhodopsin had converted back to its inactive form. This finding suggests that phosphorylation of arrestin is necessary for the release of arrestin from rhodopsin. We propose that the sequestering of arrestin to membranes is a possible mechanism for retinal disease associated with previously identified rhodopsin alleles in humans.
Abstract:
Transcriptional termination of the GAL10 gene in Saccharomyces cerevisiae depends on the efficiency of polyadenylation. Either cis mutations in the poly(A) signal or trans mutations of mRNA 3′ end cleavage factors result in GAL10 read-through transcripts into the adjacent GAL7 gene and inactivation (occlusion) of the GAL7 promoter. Herein, we present a molecular explanation of this transcriptional interference phenomenon. In vivo footprinting data reveal that GAL7 promoter occlusion is associated with the displacement of Gal4p transcription factors from the promoter. Interestingly, overexpression of Gal4p restores promoter occupancy, activates GAL7 expression, and rescues growth on the otherwise toxic galactose substrate. Our data therefore demonstrate a precise balance between transcriptional interference and initiation.
Abstract:
In cardiac myocytes Ca2+ cross-signaling between Ca2+ channels and ryanodine receptors takes place by exchange of Ca2+ signals in microdomains surrounding dyadic junctions, allowing first the activation and then the inactivation of the two Ca2+-transporting proteins. To explore the details of Ca2+ signaling between the two sets of receptors we measured the two-dimensional cellular distribution of Ca2+ at 240 Hz by using a novel confocal imaging technique. Ca2+ channel-triggered Ca2+ transients could be resolved into dynamic “Ca2+ stripes” composed of hundreds of discrete focal Ca2+ releases, appearing as bright fluorescence spots (radius ≅ 0.5 μm) at reproducible sites, which often coincided with t-tubules as visualized with fluorescent staining of the cell membrane. Focal Ca2+ releases triggered stochastically by Ca2+ current (ICa) changed little in duration (≅7 ms) and size (≅100,000 Ca ions) between −40 and +60 mV, but their frequency of activation and first latency mirrored the kinetics and voltage dependence of ICa. The resolution of 0.95 ± 0.13 reproducible focal Ca2+ release sites per μm3 in highly Ca2+-buffered cells, where diffusion of Ca2+ is limited to 50 nm, suggests the presence of about one independent, functional Ca2+ release site per half sarcomere. The density and distribution of Ca2+ release sites suggest they correspond to dyadic junctions. The abrupt onset and termination of focal Ca2+ releases indicate that the cluster of ryanodine receptors in individual dyadic junctions may operate in a coordinated fashion.
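The size and duration figures in the abstract above (≅100,000 Ca ions released over ≅7 ms per focal release) imply a unitary release current of a few picoamperes. A minimal back-of-the-envelope check, assuming the round numbers quoted and the divalent charge of Ca2+ (the variable names are illustrative, not from the paper):

```python
# Hypothetical arithmetic check of the reported focal Ca2+ release figures.
E_CHARGE = 1.602e-19   # elementary charge, coulombs
ions = 1e5             # ~100,000 Ca ions per focal release (from the abstract)
valence = 2            # each Ca2+ ion carries two elementary charges
duration_s = 7e-3      # ~7 ms release duration (from the abstract)

charge_c = ions * valence * E_CHARGE      # total charge moved per release
current_pa = charge_c / duration_s * 1e12 # mean current in picoamperes
print(f"implied unitary release current ~ {current_pa:.1f} pA")
# prints "implied unitary release current ~ 4.6 pA"
```

A current of this magnitude is consistent with a cluster of ryanodine receptors in a single dyadic junction opening together, which fits the coordinated-gating interpretation the abstract proposes.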