938 results for front end studies
Abstract:
For the past 20 years, the dynamic analysis of shells has been one of the most fascinating fields of research. With the new lightweight materials, building engineers soon discovered that the resulting reduction of gravity forces produced not only the desired freedom of shape but also the emergence of environmental loads as the primary design factor; loads with strongly random properties and a marked dynamic character. At the same time, technological advances in the aeronautical and astronautical fields confronted engineers with shell structures of unconventional shape that must sustain substantially dynamic loads. The response to the increasingly challenging problems of the last two decades has been remarkable: new forms, new materials and new methods of analysis have arisen in the design of offshore platforms, nuclear vessels, spacecraft, etc. Thanks to the intensity of these years, we now have at our disposal a coherent and homogeneous body of knowledge that enables us to face problems of a complexity inconceivable when the IASS was founded. The open-minded approach to classical problems and the impact of the computer are probably important factors in the renaissance we have enjoyed in these years; good proof of this are the papers presented at previous IASS meetings as well as those we are going to consider in this one. Particularly striking is the great number of papers based on mathematical modelling compared with the scarcity of those treating laboratory experiments on physical models. The universal entry of the computer into almost every phase of our lives, and the cost of physical models, are perhaps the reasons for this scarcity of experimental work. Nevertheless, experimental methods continue to offer useful results, such as those obtained with the shaking table, in which the computer plays an essential role both in the application of loads and in the real-time processing of control data. Plates 1 and 2 record the papers presented under the dynamics heading; 40% of them are from Japan, in good correlation with the relevance that Japanese research has traditionally shown in this area. It is also interesting to find old friends such as Professors Tanaka, Nishimura and Kostem, who presented valuable papers at previous IASS conferences. As we can see, there are papers representative of all tendencies, even purely analytical ones. Rather than discuss them in detail, which can be done after the authors' presentations, I think we can comment on the general pattern of the dynamic approach summarized in Plate 3.
Abstract:
Currently, cryopreservation techniques are of growing importance for the long-term storage of plant germplasm. These methods have undergone great progress during the last two decades, and adequate protocols have been developed for different plant systems, making use of diverse strategies such as vitrification, encapsulation-dehydration with alginate beads and the droplet-vitrification method. This PhD thesis aims to increase knowledge of the processes underlying the different steps of cryopreservation protocols, in relation to the state of the water present in the tissues and its changes, approached through diverse biophysical techniques, mainly differential scanning calorimetry (DSC) and low-temperature scanning electron microscopy (cryo-SEM). A first study of these cryopreservation methods describes the phases of cooling to liquid nitrogen temperature and of warming to room temperature at the end of the storage period, which are critical for the survival of the cryopreserved material. Both cooling and warming must be carried out as quickly as possible because, although the low water contents achieved in earlier protocol steps significantly reduce the probability of ice formation, they do not eliminate it entirely. Within this context, the influence of the cooling and warming rates of plant vitrification solutions on their vitrification-related thermophysical parameters is also analyzed, in relation to their composition and component concentrations. These solutions are used in most of the plant cryopreservation protocols currently in use. In addition, the influence of other factors that may determine the stability of the vitrified material, such as glass aging, is studied. Experimental work was carried out on the use of cryo-SEM as a tool for visualizing the glassy state in cells and tissues subjected to cryopreservation processes. It was compared with the better-known differential scanning calorimetry technique, and the results obtained were highly concordant and complementary. The effect on plant tissues of adaptation to low temperature and of the dehydration induced by the different treatments used in the protocols was also explored with these techniques. This study makes it possible to follow the biophysical evolution of the systems during the cryopreservation process. Lastly, the application of chitosan films to the alginate beads used in the encapsulation protocol was examined. No significant changes were observed in their behavior on dehydration, in their calorimetric parameters or in the surface of the beads; their application to confer additional properties on the gel beads is promising.
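Where DSC traces are available, the glass transition discussed above is usually read as a step change in heat flow during warming. The following is a minimal, illustrative Python sketch (not the thesis's own analysis pipeline) that estimates a glass-transition midpoint from a heat-flow curve by locating the maximum of its temperature derivative; the smoothing window and the synthetic data are assumptions.

```python
import numpy as np

def estimate_tg(temperature_c, heat_flow_mw, smooth_window=9):
    """Rough glass-transition midpoint from a DSC warming scan.

    The glass transition appears as a step in heat flow; its midpoint is
    approximated here by the temperature where d(heat flow)/dT is largest.
    This is only an illustrative estimate, not a substitute for the standard
    onset/midpoint constructions used in DSC software.
    """
    t = np.asarray(temperature_c, dtype=float)
    q = np.asarray(heat_flow_mw, dtype=float)

    # Simple moving-average smoothing to tame instrument noise.
    kernel = np.ones(smooth_window) / smooth_window
    q_smooth = np.convolve(q, kernel, mode="same")

    # Numerical derivative of heat flow with respect to temperature.
    dq_dt = np.gradient(q_smooth, t)

    # Midpoint taken at the maximum slope of the step.
    return t[np.argmax(dq_dt)]

if __name__ == "__main__":
    # Synthetic scan: a smooth step near -115 degC standing in for a Tg.
    temp = np.linspace(-150.0, -60.0, 500)
    flow = 0.5 / (1.0 + np.exp(-(temp + 115.0) / 2.0)) + 0.01 * np.random.randn(temp.size)
    print(f"Estimated Tg midpoint: {estimate_tg(temp, flow):.1f} degC")
```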
Abstract:
A participatory modelling process has been conducted in two areas of the Guadiana river basin (the upper and the middle sub-basins), in Spain, with the aim of providing support for decision making in water management. The area has a semi-arid climate in which irrigated agriculture plays a key role in the economic development of the region and accounts for around 90% of water use. Following the guidelines of the European Water Framework Directive, we promote stakeholder involvement in water management with the aim of achieving an improved understanding of the water system and of encouraging the exchange of knowledge and views between stakeholders, in order to help build a shared vision of the system. At the same time, the resulting models, which integrate the different sectors and views, provide some insight into the impacts that different management options and possible future scenarios could have. The methodology is based on a Bayesian network combined with an economic model and, in the middle Guadiana sub-basin, with a crop model. The resulting integrated modelling framework is used to simulate possible water policy, market and climate scenarios and to assess the impacts of those scenarios on farm income and on the environment. At the end of the modelling process, an evaluation questionnaire was completed by participants in both sub-basins. Results show that stakeholders find this type of process very helpful for improving their understanding of the system, for understanding each other's views and for reducing conflict where it exists. In addition, they found the model an extremely useful tool to support management. The graphical interface, the quantitative output and the explicit representation of uncertainty helped stakeholders to better understand the implications of the scenarios tested. Finally, the combination of different types of models was also found very useful, as it allowed specific aspects of the water management problems to be explored in detail.
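As an illustration of how a Bayesian network coupled to a simple economic relation can be queried for scenario impacts, the sketch below encodes a toy network (climate scenario, water availability, irrigated area, farm income) with invented conditional probability tables and computes expected farm income by enumeration. All variable names, states, probabilities and income figures are assumptions for illustration only; they are not taken from the Guadiana models.

```python
# Toy Bayesian network: Climate -> Water -> IrrigatedArea -> FarmIncome.
# All probabilities and payoffs below are invented for illustration.

P_WATER = {            # P(Water | Climate)
    "dry":    {"low": 0.8, "high": 0.2},
    "normal": {"low": 0.3, "high": 0.7},
}
P_AREA = {             # P(IrrigatedArea | Water)
    "low":  {"reduced": 0.7, "full": 0.3},
    "high": {"reduced": 0.2, "full": 0.8},
}
INCOME = {             # Assumed expected farm income (EUR/ha) given irrigated area
    "reduced": 900.0,
    "full":    1500.0,
}

def expected_income(climate: str) -> float:
    """Expected farm income under a climate scenario, by enumeration over the network."""
    total = 0.0
    for water, p_w in P_WATER[climate].items():
        for area, p_a in P_AREA[water].items():
            total += p_w * p_a * INCOME[area]
    return total

if __name__ == "__main__":
    for scenario in ("dry", "normal"):
        print(f"{scenario:>6}: expected income = {expected_income(scenario):.0f} EUR/ha")
```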
Abstract:
The present work studies the resistance of concrete to magnesium sulfate (MgSO4) attack and compares the results with values obtained previously for the same concretes exposed to sodium sulfate (Na2SO4). In this way, the influence of the cation accompanying the sulfate ion can be analyzed. To that end, four different concrete mixes were prepared with sulfate-resistant cement and mineral admixtures (silica fume, fly ash and blast furnace slag). The concretes were submerged for different periods in a 1 M magnesium sulfate (MgSO4) solution, after which tests were carried out to characterize their mechanical and microstructural properties. The results obtained were compared with reference values for concretes cured in calcium hydroxide [Ca(OH)2]. According to the results, the concrete with blast furnace slag showed the best behavior against MgSO4, while the concretes with silica fume and fly ash were the most susceptible. The resistance of the concrete with blast furnace slag can be attributed to the characteristics of the hydrated silicates formed during hydration, which incorporate aluminum into their chains and thereby hinder their chemical decomposition under magnesium attack. The magnesium sulfate solution proved more aggressive than the sodium sulfate solution.
Abstract:
Planning and Community Development: Case Studies presents the findings of the inter-university seminar held on 28-29 July 2011 and organized by researchers from the Technical University of Madrid and the University of California, Berkeley, who were fortunate to have the presence of the renowned Professor John Friedmann. Professors, researchers and PhD students from our research groups presented their work as scientific communications, which were enriched by the debate among the different researchers who attended the seminar. All of them appear in the picture below, in front of the gate of Haviland Hall at UC Berkeley. This book analyses the concept of planning and its evolution so far, leading to the conceptualization of governance as an expression of planning practice. It also studies the role of social capital and cooperation as tools for community development. The conceptual analysis is complemented by six case studies that put forward experiences of planning and community development carried out in diverse social and cultural contexts in Latin America, Europe and North America. This publication comes after more than 20 years of work by the researchers who met at the seminar. Through their work in managing development initiatives, they have learned lessons and have contributed to shaping their own body of teaching, which develops and analyses the role of planning in the public domain to promote community development. This knowledge is synthesized in the model Planning as Working With People, which shows that development is not effective unless it is promoted in continuous collaboration with all the actors involved in the process.
Abstract:
HTTP adaptive streaming technology has become widespread in multimedia services because of its ability to adapt to the characteristics of various viewing devices and to dynamic network conditions. Various studies have targeted the optimization of the adaptation strategy. However, in order to provide an optimal viewing experience to the end user, it is crucial to understand the Quality of Experience (QoE) delivered by different adaptation schemes. This paper reviews the state of the art in the subjective evaluation of adaptive streaming QoE and highlights the challenges and open research questions related to QoE assessment.
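To make the idea of an adaptation strategy concrete, the sketch below shows one simple, throughput-based bitrate selection rule of the kind HTTP adaptive streaming clients typically implement: pick the highest representation whose bitrate fits within a safety margin of the measured throughput. It is a generic illustration under assumed bitrates and margins, not the scheme of any particular player or of the paper.

```python
# Minimal throughput-based adaptation rule for HTTP adaptive streaming.
# Representation bitrates (kbit/s) and the safety margin are assumptions.
REPRESENTATIONS_KBPS = [235, 750, 1750, 4300, 8000]

def select_bitrate(throughput_kbps: float, safety_margin: float = 0.8) -> int:
    """Return the highest representation bitrate that fits within the throughput budget."""
    budget = throughput_kbps * safety_margin
    feasible = [r for r in REPRESENTATIONS_KBPS if r <= budget]
    return feasible[-1] if feasible else REPRESENTATIONS_KBPS[0]

if __name__ == "__main__":
    # Simulated per-segment throughput measurements (kbit/s).
    for measured in (900, 2600, 5200, 300):
        print(f"throughput {measured:>5} kbit/s -> request {select_bitrate(measured)} kbit/s")
```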
Abstract:
The main objective of this PhD thesis is to go more deeply into the analysis and design of an intelligent system for surface roughness prediction and control in the end-milling machining process, based fundamentally on Bayesian network classifiers, with the aim of developing a methodology that makes the design of this type of system easier. The system, whose purpose is to make surface roughness prediction and control possible, consists of a model learned from experimental data with the aid of Bayesian networks, which helps to understand the dynamic processes involved in machining and the interactions among the relevant variables. Since artificial neural networks are widely used to model material-cutting processes, we also include an end-milling model based on them, in which the geometry and hardness of the workpiece are introduced as novel variables not studied so far in this context. An important contribution of this thesis is therefore these two models for surface roughness prediction, which are compared with respect to different aspects: the influence of the new variables, performance evaluation metrics, and interpretability. One of the main problems with Bayesian classifier-based modelling is understanding the enormous posterior probability tables produced. We introduce an explanation method that generates a set of rules obtained from decision trees. Such trees are induced from a simulated data set generated from the posterior probabilities of the class variable, calculated with the Bayesian network learned from a training data set. Finally, we contribute to the multi-objective field for the case in which some of the objectives cannot be quantified as real numbers but only as interval-valued functions. This often occurs in machine learning applications, especially those based on supervised classification. Specifically, the dominance and Pareto-front ideas are extended to this setting. Their application to surface roughness prediction addresses the case of simultaneously maximizing the sensitivity and specificity of the induced Bayesian network classifier, rather than only maximizing the correct classification rate. The intervals for these two objectives come from an honest estimation method for both objectives, such as k-fold cross-validation or the bootstrap.
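As a hedged illustration of how Pareto dominance might be extended to interval-valued objectives such as cross-validated sensitivity and specificity, the sketch below uses one simple candidate rule: classifier A dominates classifier B if, in every objective, A's interval lower bound is at least B's interval upper bound, with strict inequality in at least one objective. This particular rule and the example intervals are assumptions for illustration; the thesis's own definitions may differ.

```python
from typing import Dict, List, Tuple

Interval = Tuple[float, float]  # (lower, upper), e.g. from k-fold cross-validation

def dominates(a: Dict[str, Interval], b: Dict[str, Interval]) -> bool:
    """One possible interval-dominance rule (all objectives are to be maximized)."""
    at_least_as_good = all(a[obj][0] >= b[obj][1] for obj in a)
    strictly_better = any(a[obj][0] > b[obj][1] for obj in a)
    return at_least_as_good and strictly_better

def pareto_front(candidates: Dict[str, Dict[str, Interval]]) -> List[str]:
    """Names of candidates not dominated by any other candidate."""
    return [
        name for name, objs in candidates.items()
        if not any(dominates(other, objs)
                   for other_name, other in candidates.items() if other_name != name)
    ]

if __name__ == "__main__":
    # Invented sensitivity/specificity intervals for three hypothetical classifiers.
    classifiers = {
        "BN-A": {"sensitivity": (0.82, 0.90), "specificity": (0.70, 0.78)},
        "BN-B": {"sensitivity": (0.60, 0.68), "specificity": (0.55, 0.62)},
        "BN-C": {"sensitivity": (0.75, 0.83), "specificity": (0.80, 0.88)},
    }
    print("Pareto front:", pareto_front(classifiers))
```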
Abstract:
Fashion is one of the most vibrant sectors in Europe and an important contributor to the European Union (EU) economy. In particular, small and medium enterprises (SMEs) play a major part in the European fashion industry (EU 2012). Just like fashion, where people's style is inherently meant to be shared, as it is foremost a representation of one's self-image, social media allow the reflection of one's personality and emotions. Although fashion practitioners have embraced social media in their marketing activities, relatively little is still known at an academic level about the specificities of the fashion industry when approaching social media marketing (SMM) strategies. This study sets out to explore fashion companies' SMM strategies and activities. Taking an exploratory approach, we present case studies of two Spanish SME fashion companies, referred to hereafter as Company A and Company B, to deepen our understanding of how fashion brands implement their SMM strategy. Company A offers high-end fashion products, while Company B produces mid-range fashion products. We analyzed the case studies using qualitative methods (interviews with company executives) and a mix of qualitative and quantitative methods (content analysis of the companies' social media platforms). Public post data from both companies' Facebook brand pages were used to perform the content analysis. Our findings from the case studies of the two companies reveal that branding-oriented strategic objectives are the main drivers of their SMM implementations. There are significant differences between the two companies. The main strategic action employed by Company A is encouraging customers to participate in the brand's offline social gathering events by inviting them through the social media platform, while Company B focuses its effort on posting product-promotion content and engaging influencers such as fashion bloggers. Our results are expected to serve as a basis for further investigation of how the SMM strategy and strategic actions implemented by fashion brands may influence marketing outcomes.
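To illustrate the quantitative side of such a content analysis, the sketch below tallies post categories and average engagement from a table of public Facebook posts. The coding categories, column layout and figures are purely hypothetical; they are not the coding scheme or data used in the study.

```python
from collections import defaultdict

# Hypothetical coded posts: (company, content category, likes + comments + shares).
POSTS = [
    ("A", "event_invitation", 120),
    ("A", "brand_story", 85),
    ("B", "product_promotion", 240),
    ("B", "influencer_feature", 310),
    ("B", "product_promotion", 190),
]

def summarize(posts):
    """Per-company post counts and mean engagement by content category."""
    counts = defaultdict(int)
    engagement = defaultdict(list)
    for company, category, score in posts:
        counts[(company, category)] += 1
        engagement[(company, category)].append(score)
    return {key: (n, sum(engagement[key]) / n) for key, n in counts.items()}

if __name__ == "__main__":
    for (company, category), (n, mean_eng) in sorted(summarize(POSTS).items()):
        print(f"Company {company} | {category:<20} posts={n} mean engagement={mean_eng:.0f}")
```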
Industrial light and the technified image: from Moholy Nagy to the C.A.V.S. (Center for Advanced Visual Studies)
Abstract:
The development of light technology entailed a transformation of social, cultural and economic life. Both the spatial concerns of the Modern Movement and the effects of the Second World War left visible traces on the new spatial configurations and on the symbiotic, reciprocal relationship established between ideology and technology. The transformation in the understanding of spatial articulation, associated with technological development, affected the way in which space is experienced and perceived. The exhibition space and the stage became practical laboratories in which to develop, and make comprehensible, the full illusory potential of light, projection and the image as parameters that modify and energize architectural space. This spatial experimentation was preceded by conceptual research and creation in the plastic arts, where the new mechanical media were responsible for the construction of a new modern gaze mediated by technical elements. Optical experimentation, through photography, film, or the movement of light and its perception, linked to new modes of representation and communication, became a fundamental element of spatial configuration. This sphere of experimentation became evident at the Bauhaus, at the hands of Gropius, Schlemmer and Moholy Nagy, among others, both in theoretical reflections and in exhibition, architectural and theatrical projects that evolved on the basis of technology and of the changing relationship with the spectator. The exhibition space and the stage were taken as opportunities for spatial investigation and for the analysis of modes of perception, becoming basic places of experimentation for learning. The theater is postulated as a meeting point between art and technique, with its intersection with other disciplines taking on special importance in the definition of space. The many technical innovations of the early twentieth century linked to the new theatrical principles, which modified the relationship with the stage, resulted in the transformation of space into a dynamic space, both physically and perceptually, giving rise to new spatial conceptions, many of them utopian. Light, projection and the creation of illusion by means of visual and sound stimuli appear as ephemeral, immaterial design elements with a great impact on space and on the way it is experienced. The involvement of technology in art brought changes in visualization as well as in the spatial configuration of the spaces devoted to it. A notable proposal is Walter Gropius's Total Theater, whose development gathers the spatial experiences and the investigations into the formal structure of perception carried out by Moholy Nagy, together with the concepts of stage space developed in the Bauhaus theater workshop by Oskar Schlemmer. In the Total Theater, Gropius incorporated his own view of questions belonging to the tradition of theater architecture and of the conceptual innovations that had been taking place since the end of the nineteenth century, such as the active participation of the audience or the dissolution of the boundary between stage and auditorium, establishing in the project a new perceptual relationship between hall, performance and spectator and increasing the sense of immersion through the use of physics, optics and acoustics, creating a concentric energy capable of spreading in all directions. The Total Theater was one of the first examples in which, from the project's point of departure, the image as a communicative element is combined with the spatial configuration. The new stage configurations took as their premise the capacity for transformation, both perceptual and physical. In the second half of the twentieth century, the creation of research centers such as the CAVS (Center for Advanced Visual Studies, 1967) and EAT (Experiments in Art and Technology, 1966) fostered interdisciplinary collaboration between art and science, involving technology companies such as Siemens, HP, IBM and Philips, which provided technical and economic support for the development of new systems. This interdisciplinary collaboration gave rise to a series of spatial interventions whose greatest visibility came at some of the Universal Exhibitions. The result was, in most cases, the creation of immersive spaces in which a symbiotic relationship was established between space, image, sound and spectator. Placing the spectator at the center of the scene, together with the dynamic arrangement of image and sound, created a particular non-linear spatial narrative conceived to be experienced. From the first film projections to the multiple screens of the Eameses, the spatial techniques of sound diffusion in Stockhausen, or the experiments with interactive physical movement, the image, moving light and sound inevitably became architectural material.
Abstract:
Advanced glycation end products (AGEs) are thought to contribute to the abnormal lipoprotein profiles and increased risk of cardiovascular disease of patients with diabetes and renal failure, in part by preventing apolipoprotein B (apoB)-mediated cellular uptake of low density lipoproteins (LDL) by LDL receptors (LDLr). It has been proposed that AGE modification at one site in apoB, almost 1,800 residues from the putative apoB LDLr-binding domain, may be sufficient to induce an apoB conformational change that prevents binding to the LDLr. To further explore this hypothesis, we used 29 anti-human apoB mAbs to identify other potential sites on apoB that may be modified by in vitro advanced glycation of LDL. Glycation of LDL caused a time-dependent decrease in its ability to bind to the LDLr and in the immunoreactivity of six distinct apoB epitopes, including two that flank the apoB LDLr-binding domain. ApoB appears to be modified at multiple sites by these criteria, as the loss of glycation-sensitive epitopes was detected on both native glycated LDL and denatured, delipidated glycated apoB. Moreover, residues directly within the putative apoB LDLr-binding site are not apparently modified in glycated LDL. We propose that the inability of LDL modified by AGEs to bind to the LDLr is caused by modification of residues adjacent to the putative LDLr-binding site that were undetected by previous immunochemical studies. AGE modification either eliminates the direct participation of the residues in LDLr binding or indirectly alters the conformation of the apoB LDLr-binding site.
Abstract:
We have investigated mRNA 3′-end-processing signals in each of six eukaryotic species (yeast, rice, arabidopsis, fruitfly, mouse, and human) through the analysis of more than 20,000 3′-expressed sequence tags. The use and conservation of the canonical AAUAAA element vary widely among the six species and are especially weak in plants and yeast. Even in the animal species, the AAUAAA signal does not appear to be as universal as indicated by previous studies. The abundance of single-base variants of AAUAAA correlates with their measured processing efficiencies. As found previously, the plant polyadenylation signals are more similar to those of yeast than to those of animals, with both common content and arrangement of the signal elements. In all species examined, the complete polyadenylation signal appears to consist of an aggregate of multiple elements. In light of these and previous results, we present a broadened concept of 3′-end-processing signals in which no single exact sequence element is universally required for processing. Rather, the total efficiency is a function of all elements and, importantly, an inefficient word in one element can be compensated for by strong words in other elements. These complex patterns indicate that effective tools to identify 3′-end-processing signals will require more than consensus sequence identification.
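As a small illustration of the kind of signal survey described above, the sketch below scans a set of 3'-end sequences for the canonical AAUAAA hexamer and its single-base variants and tallies how often each occurs. The example sequences are invented and the scan window is an assumption; the original study's EST processing is of course far more involved.

```python
from collections import Counter
from itertools import product

CANONICAL = "AAUAAA"
BASES = "ACGU"

def single_base_variants(word: str) -> set:
    """The word itself plus every sequence differing from it at exactly one position."""
    variants = {word}
    for i, b in product(range(len(word)), BASES):
        if b != word[i]:
            variants.add(word[:i] + b + word[i + 1:])
    return variants

def count_hexamers(sequences, window=50) -> Counter:
    """Count canonical/variant hexamers in the last `window` nt of each sequence."""
    targets = single_base_variants(CANONICAL)
    counts = Counter()
    for seq in sequences:
        region = seq.upper().replace("T", "U")[-window:]
        for i in range(len(region) - 5):
            hexamer = region[i:i + 6]
            if hexamer in targets:
                counts[hexamer] += 1
    return counts

if __name__ == "__main__":
    # Invented 3'-end fragments for illustration only.
    ests = [
        "GCUAAUAAAGUUUUCACUGCAUUCUAGUUGUGGUUUGUCC",
        "UUAUUAAAGCAAUAGCAUCACAAAUUUCACAAAUAAAGCA",
        "CCAAGAAUACAUUAAAUUUUACUGCAUUCUAG",
    ]
    for hexamer, n in count_hexamers(ests).most_common():
        label = "canonical" if hexamer == CANONICAL else "variant"
        print(f"{hexamer} ({label}): {n}")
```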
Abstract:
Cells of vertebrates remove DNA double-strand breaks (DSBs) from their genome predominantly utilizing a fast, DNA-PKcs-dependent form of non-homologous end joining (D-NHEJ). Mutants with inactive DNA-PKcs remove the majority of DNA DSBs utilizing a slow, DNA-PKcs-independent pathway that does not utilize genes of the RAD52 epistasis group, is error-prone and can therefore be classified as a form of NHEJ (termed basic or B-NHEJ). We studied the role of DNA ligase IV in these pathways of NHEJ. Although biochemical studies show physical and functional interactions between the DNA-PKcs/Ku and the DNA ligase IV/Xrcc4 complexes suggesting operation within the same pathway, genetic evidence to support this notion is lacking in mammalian cells. Primary human fibroblasts (180BR) with an inactivating mutation in DNA ligase IV, rejoined DNA DSBs predominantly with slow kinetics similar to those observed in cells deficient in DNA-PKcs, or in wild-type cells treated with wortmannin to inactivate DNA-PK. Treatment of 180BR cells with wortmannin had only a small effect on DNA DSB rejoining and no effect on cell radiosensitivity to killing although it sensitized control cells to 180BR levels. This is consistent with DNA ligase IV functioning as a component of the D-NHEJ, and demonstrates the unperturbed operation of the DNA-PKcs-independent pathway (B-NHEJ) at significantly reduced levels of DNA ligase IV. In vitro, extracts of 180BR cells supported end joining of restriction endonuclease-digested plasmid to the same degree as extracts of control cells when tested at 10 mM Mg2+. At 0.5 mM Mg2+, where only DNA ligase IV is expected to retain activity, low levels of end joining (∼10% of 10 mM) were seen in the control but there was no detectable activity in 180BR cells. Antibodies raised against DNA ligase IV did not measurably inhibit end joining at 10 mM Mg2+ in either cell line. Thus, in contrast to the situation in vivo, end joining in vitro is dominated by pathways with properties similar to B-NHEJ that do not display a strong dependence on DNA ligase IV, with D-NHEJ retaining only a limited contribution. The implications of these observations to studies of NHEJ in vivo and in vitro are discussed.
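Rejoining time courses of the kind described here are often summarized by a two-component exponential with fast and slow fractions. The sketch below fits such a model to invented data with scipy; the model form, the parameter bounds and the data points are assumptions for illustration, not the paper's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, fast_fraction, t_fast, t_slow):
    """Fraction of unrejoined DSBs with a fast (D-NHEJ-like) and a slow (B-NHEJ-like) component."""
    return fast_fraction * np.exp(-t / t_fast) + (1.0 - fast_fraction) * np.exp(-t / t_slow)

if __name__ == "__main__":
    # Invented repair time course: hours post-irradiation vs fraction of DSBs remaining.
    hours = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
    remaining = np.array([1.00, 0.72, 0.55, 0.38, 0.27, 0.20, 0.12, 0.03])

    popt, _ = curve_fit(biexponential, hours, remaining,
                        p0=[0.7, 0.5, 10.0], bounds=([0, 0.01, 0.5], [1, 5, 50]))
    fast_fraction, t_fast, t_slow = popt
    print(f"fast fraction = {fast_fraction:.2f}, tau_fast = {t_fast:.2f} h, tau_slow = {t_slow:.1f} h")
```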
Abstract:
RNase E initiates the decay of Escherichia coli RNAs by cutting them internally near their 5′-end and is a component of the RNA degradosome complex, which also contains the 3′-exonuclease PNPase. Recently, RNase E has been shown to be able to remove poly(A) tails by what has been described as an exonucleolytic process that can be blocked by the presence of a phosphate group on the 3′-end of the RNA. We show here, however, that poly(A) tail removal by RNase E is in fact an endonucleolytic process that is regulated by the phosphorylation status at the 5′- but not the 3′-end of RNA. The rate of poly(A) tail removal by RNase E was found to be 30-fold greater when the 5′-terminus of RNA substrates was converted from a triphosphate to monophosphate group. This finding prompted us to re-analyse the contributions of the ribonucleolytic activities within the degradosome to 3′ attack since previous studies had only used substrates that had a triphosphate group on their 5′-end. Our results indicate that RNase E associated with the degradosome may contribute to the removal of poly(A) tails from 5′-monophosphorylated RNAs, but this is only likely to be significant should their attack by PNPase be blocked.
Abstract:
Protein phosphoaspartate bonds play a variety of roles. In response regulator proteins of two-component signal transduction systems, phosphorylation of an aspartate residue is coupled to a change from an inactive to an active conformation. In phosphatases and mutases of the haloacid dehalogenase (HAD) superfamily, phosphoaspartate serves as an intermediate in phosphotransfer reactions, and in P-type ATPases, also members of the HAD family, it serves in the conversion of chemical energy to ion gradients. In each case, lability of the phosphoaspartate linkage has hampered a detailed study of the phosphorylated form. For response regulators, this difficulty was recently overcome with a phosphate analog, BeF3-, which yields persistent complexes with the active site aspartate of their receiver domains. We now extend the application of this analog to a HAD superfamily member by solving at 1.5-Å resolution the x-ray crystal structure of the complex of BeF3- with phosphoserine phosphatase (PSP) from Methanococcus jannaschii. The structure is comparable to that of a phosphoenzyme intermediate: BeF3- is bound to Asp-11 with the tetrahedral geometry of a phosphoryl group, is coordinated to Mg2+, and is bound to residues surrounding the active site that are conserved in the HAD superfamily. Comparison of the active sites of BeF3-⋅PSP and BeF3-⋅CheY, a receiver domain/response regulator, reveals striking similarities that provide insights into the function not only of PSP but also of P-type ATPases. Our results indicate that use of BeF3- for structural studies of proteins that form phosphoaspartate linkages will extend well beyond response regulators.
Abstract:
Floor plans and front and end elevations of Indian College drawn by H.R. Shurtleff in May 1934 based on research conducted by Shurtleff from the Harvard College Records and surveys of local period buildings. Shows likely configuration of Indian College with lodging for 20 students, studies, and the printing room which housed the printing press.