Abstract:
The hydrolysis and the reactivity of two dinuclear p-cymene ruthenium monothiolato complexes, [(η6-p-MeC6H4Pri)2Ru2Cl2(µ-Cl)(µ-S-m-9-B10C2H11)] (1) and [(η6-p-MeC6H4Pri)2Ru2Cl2(µ-Cl)(µ-SCH2-p-C6H4-NO2)] (2), and of two dinuclear p-cymene ruthenium dithiolato complexes, [(η6-p-MeC6H4Pri)2Ru2(µ-SCH2CH2Ph)2Cl2] (3) and [(η6-p-MeC6H4Pri)2Ru2(SCH2C6H4-p-OMe)2Cl2] (4), towards amino acids, nucleotides, and a single-stranded DNA dodecamer were studied using NMR and mass spectrometry. In aqueous solution at 37 °C, the monothiolato complexes 1 and 2 undergo rapid hydrolysis, irrespective of the pH value, the predominant species in D2O/acetone-d6 solution at equilibrium being the neutral hydroxo complexes [(η6-p-MeC6H4Pri)2Ru2(OD)2(µ-OD)(µ-SR)]. The dithiolato complexes 3 and 4 are stable in water under acidic conditions, but undergo slow hydrolysis under neutral and basic conditions. In both cases, the cationic hydroxo complexes [(η6-p-MeC6H4Pri)2Ru2(µ-SR)2(OD)(CD3CN)]+ are the only species observed in D2O/CD3CN at equilibrium. Surprisingly, no adducts are observed upon addition of an excess of L-methionine or L-histidine to aqueous solutions of the complexes. Upon addition of an excess of L-cysteine, on the other hand, 1 and 2 form the unusual cationic trithiolato complexes [(η6-p-MeC6H4Pri)2Ru2{µ-SCH2CH(NH2)COOH}2(µ-SR)]+ containing two bridging cysteinato ligands, while 3 and 4 yield cationic trithiolato complexes [(η6-p-MeC6H4Pri)2Ru2{µ-SCH2CH(NH2)COOH}(µ-SR)2]+ containing one bridging cysteinato ligand. A representative cationic trithiolato complex containing a cysteinato bridge of this type, [(η6-p-MeC6H4Pri)2Ru2{µ-SCH2CH(NH2)COOH}(µ-SCH2-p-C6H4-But)2]+ (6), could be synthesised from the dithiolato complex [(η6-p-MeC6H4Pri)2Ru2(SCH2C6H4-p-But)2Cl2] (5) and was isolated as the tetrafluoroborate salt and fully characterised. Moreover, the mono- and dithiolato complexes 1-4 are inert toward nucleotides and DNA, suggesting that DNA is not a target of cytotoxic thiolato-bridged arene ruthenium complexes. In contrast to the trithiolato complexes, the monothiolato and dithiolato complexes hydrolyse and react with L-cysteine. These results may have important implications for the mode of action of thiolato-bridged dinuclear arene ruthenium drug candidates, and suggest that their modes of action differ from those of other arene ruthenium complexes.
Abstract:
Interleukin 4 (IL-4) is a pleiotropic cytokine affecting a wide range of cell types in both the mouse and the human. These activities include regulation of the growth and differentiation of both T and B lymphocytes. The activities of IL-4 in nonprimate, nonmurine systems are not well established. Herein, we demonstrate in the bovine system that IL-4 upregulates production of IgM, IgG1, and IgE in the presence of a variety of costimulators, including anti-IgM, Staphylococcus aureus Cowan strain I, and pokeweed mitogen. IgE responses are potentiated by the addition of IL-2 to IL-4. Culture of bovine B lymphocytes with IL-4 in the absence of additional costimulators resulted in increased surface expression of CD23 (low-affinity Fc epsilon RII), IgM, IL-2R, and MHC class II in a dose-dependent manner. IL-4 alone increased basal levels of proliferation of bulk peripheral blood mononuclear cells, but in the presence of Con A it inhibited proliferation. In contrast to the activities of IL-4 in the murine system, proliferation of TH1- and TH2-like clones was inhibited in a dose-dependent manner, as assessed by antigen- or IL-2-driven in vitro proliferative responses. These observations are consistent with the role of IL-4 as a key player in the regulation of both T and B cell responses.
Abstract:
by A. B. Granville
Abstract:
The occurrence of waste pharmaceuticals has been identified and well documented in water sources throughout North America and Europe. Many studies have been conducted that identify the occurrence of various pharmaceutical compounds in these waters. This project is an extensive review of the documented evidence of this occurrence published in the scientific literature, performed to determine whether it has a significant impact on the environment and public health. The review found that pharmaceuticals such as sex hormone drugs, antibiotics, and antineoplastic/cytostatic agents, as well as their metabolites, occur in water sources throughout the United States at levels high enough to have noticeable impacts on human health and the environment. It was determined that the primary sources of these pharmaceuticals were wastewater effluent and solid wastes from sewage treatment plants, pharmaceutical manufacturing plants, healthcare and biomedical research facilities, as well as runoff from veterinary medicine applications (including aquaculture). In addition, current public policies of US governmental agencies such as the Environmental Protection Agency (EPA), Food and Drug Administration (FDA), and Drug Enforcement Administration (DEA) were evaluated to see whether they sufficiently control this issue, and specific recommendations for developing these EPA, FDA, and DEA policies were made to mitigate, prevent, or eliminate it. Other possible interventions, such as engineering controls, were also evaluated; these include improved treatment technologies such as the advancement of the wastewater treatment processes used by conventional sewage treatment and pharmaceutical manufacturing plants. Administrative controls, such as the use of "green chemistry" in drug synthesis and design, were also explored and evaluated as possible alternatives, and specific recommendations for incorporating these engineering and administrative controls into the applicable EPA, FDA, and DEA policies have been made.
Abstract:
This dissertation develops, and tests through path analysis, a theoretical model to explain how socioeconomic, socioenvironmental, and biological risk factors simultaneously influence each other to produce short-term depressed growth in preschoolers. Three areas of risk factors were identified: the child's proximal environment, maturational stage, and biological vulnerability. The theoretical model represented both the conceptual framework and the nature and direction of the hypotheses. Original research completed in 1978-80 and in 1982 provided the background data, which were analyzed first by nested analysis of variance and then by path analysis. The study provided evidence implicating mild iron deficiency and gastrointestinal symptomatology in the etiology of depressed short-term weight gain. There was also evidence suggesting that family resources for material and social survival contribute significantly to the variability of short-term, age-adjusted growth velocity. These results challenge current views of unifocal intervention, whether for prevention or control. For policy formulation, though, the mechanisms underlying any set of interlaced relationships must be decoded. The theoretical formulations proposed here should be reassessed under a more extensive research design. It is suggested that studies be undertaken where social changes are actually in progress; otherwise, nutritional epidemiology in developing countries operates somewhere between social reality and research concepts, with little grasp of its real potential. The study stresses the connection between substantive theory, empirical observation, and policy issues.
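As a rough illustration of the path-analytic machinery involved, the sketch below estimates path coefficients as standardized OLS betas and derives an indirect effect as the product of coefficients along a path; the variable names echo the abstract, but the model structure and data are illustrative assumptions, not the dissertation's.

```python
# Minimal path-analysis sketch on synthetic data (hypothetical model).
import numpy as np

def standardize(x):
    return (x - x.mean()) / x.std()

def ols_betas(X, y):
    # OLS without intercept: variables are already standardized.
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(0)
n = 200
family_resources = rng.normal(size=n)                              # proximal environment
iron_deficiency = -0.4 * family_resources + rng.normal(size=n)     # biological vulnerability
weight_gain = 0.3 * family_resources - 0.5 * iron_deficiency + rng.normal(size=n)

fr, fe, wg = map(standardize, (family_resources, iron_deficiency, weight_gain))

# Path 1: resources -> iron deficiency
a = ols_betas(fr[:, None], fe)[0]
# Path 2: resources and iron deficiency -> weight gain
b_res, b_fe = ols_betas(np.column_stack([fr, fe]), wg)

print(f"direct effect of resources on growth: {b_res:.2f}")
print(f"indirect effect via iron deficiency:  {a * b_fe:.2f}")
```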
Abstract:
Estuarine organisms are exposed to periodic strong fluctuations in seawater pH driven by biological carbon dioxide (CO2) production, which may in the future be further exacerbated by the ocean acidification associated with the global rise in CO2. Calcium carbonate-producing marine species such as mollusks are expected to be vulnerable to acidification of estuarine waters, since elevated CO2 concentration and lower pH lead to a decrease in the degree of saturation of water with respect to calcium carbonate, potentially affecting biomineralization. Our study demonstrates that the increase in CO2 partial pressure (pCO2) in seawater and associated decrease in pH within the environmentally relevant range for estuaries have negative effects on physiology, rates of shell deposition and mechanical properties of the shells of eastern oysters Crassostrea virginica (Gmelin). High CO2 levels (pH ~7.5, pCO2 ~3500 µatm) caused significant increases in juvenile mortality rates and inhibited both shell and soft-body growth compared to the control conditions (pH ~8.2, pCO2 ~380 µatm). Furthermore, elevated CO2 concentrations resulted in higher standard metabolic rates in oyster juveniles, likely due to the higher energy cost of homeostasis. The high CO2 conditions also led to changes in the ultrastructure and mechanical properties of shells, including increased thickness of the calcite laths within the hypostracum and reduced hardness and fracture toughness of the shells, indicating that elevated CO2 levels have negative effects on the biomineralization process. These data strongly suggest that the rise in CO2 can impact physiology and biomineralization in marine calcifiers such as eastern oysters, threatening their survival and potentially leading to profound ecological and economic impacts in estuarine ecosystems.
Abstract:
Four strains of the coccolithophore E. huxleyi (RCC1212, RCC1216, RCC1238, RCC1256) were grown in dilute batch culture at four CO2 levels ranging from ~200 µatm to ~1200 µatm. Growth rate, particulate organic carbon content, and particulate inorganic carbon content were measured, and organic and inorganic carbon production calculated. The four strains did not show a uniform response to carbonate chemistry changes in any of the analysed parameters and none of the four strains displayed a response pattern previously described for this species. We conclude that the sensitivity of different strains of E. huxleyi to acidification differs substantially and that this likely has a genetic basis. We propose that this can explain apparently contradictory results reported in the literature.
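For context, production rates in such experiments are conventionally derived from the measured growth rate and cellular carbon quotas; a generic form (assumed here, not quoted from the paper) is:

```latex
% production = specific growth rate x cellular carbon quota
P_{\mathrm{POC}} = \mu \, Q_{\mathrm{POC}}, \qquad
P_{\mathrm{PIC}} = \mu \, Q_{\mathrm{PIC}}
```

where µ is the specific growth rate (per day) and Q_POC, Q_PIC are the particulate organic and inorganic carbon contents per cell.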
Abstract:
The combustion of fossil fuels has enriched levels of CO2 in the world's oceans and decreased ocean pH. Although the continuation of these processes may alter the growth, survival, and diversity of marine organisms that synthesize CaCO3 shells, the effects of ocean acidification since the dawn of the industrial revolution are not clear. Here we present experiments that examined the effects of the ocean's past, present, and future (21st and 22nd centuries) CO2 concentrations on the growth, survival, and condition of larvae of two species of commercially and ecologically valuable bivalve shellfish (Mercenaria mercenaria and Argopecten irradians). Larvae grown under near-preindustrial CO2 concentrations (250 ppm) displayed significantly faster growth and metamorphosis as well as higher survival and lipid accumulation rates compared with individuals reared under modern-day CO2 levels. Bivalves grown under near-preindustrial CO2 levels displayed thicker, more robust shells than individuals grown at present CO2 concentrations, whereas bivalves exposed to CO2 levels expected later this century had shells that were malformed and eroded. These results suggest that the ocean acidification that has occurred during the past two centuries may be inhibiting the development and survival of larval shellfish and contributing to global declines of some bivalve populations.
Seawater carbonate chemistry and biological processes of Porites panamensis during experiments, 2011
Abstract:
Survival of coral planulae, and the successful settlement and healthy growth of primary polyps, are critical for the dispersal of scleractinian corals and hence the recovery of degraded coral reefs. It is therefore important to explore how the warmer and more acidic oceanic conditions predicted for the future could affect these processes. This study used controlled culture to investigate the effects of a 1 °C increase in temperature and a 0.2-0.25 unit decrease in pH on the settlement and survival of planulae and the growth of primary polyps in the Tropical Eastern Pacific coral Porites panamensis. We found that primary polyp growth was reduced only marginally by more acidic seawater, but the combined effect of high temperature and lowered pH reduced the growth of primary polyps significantly, by almost a third. Elevated temperature significantly reduced the zooxanthella content of primary polyps and, when combined with lowered pH, resulted in a significant reduction in the biomass of primary polyps. However, survival and settlement of planula larvae were unaffected by increased temperature, lowered pH, or the combination of both. These results indicate that under future scenarios of increased temperature and ocean acidity coral planulae will be able to disperse and settle successfully, but primary polyp growth may be hampered. The recovery of reefs may therefore be impeded by global change even if local stressors are curbed and sufficient sources of planulae are available.
Abstract:
This work studied the combined use of gliadins and SSRs to analyse inter- and intra-accession variability in the Spanish collection of cultivated einkorn (Triticum monococcum L. ssp. monococcum) maintained at the CRF-INIA. In general, gliadin loci presented higher discrimination power than SSRs, reflecting the high variability of the gliadins. The loci on chromosome 6A were the most polymorphic, with similar PIC values for both marker systems, showing that these markers are very useful for genetic variability studies in wheat. The gliadin results indicated that the Spanish einkorn collection possesses high genetic diversity, with differentiation being large between varieties and small within them. Some associations between gliadin alleles and geographical and agro-morphological data were found, and agro-morphological relations were also observed in the clusters of the SSR dendrogram. A high concordance was found between gliadins and SSRs for genotype identification. In addition, both systems provide complementary information to resolve the different cases of intra-accession variability not detected at the agro-morphological level, and to identify all the genotypes analysed separately. The combined use of both genetic markers is an excellent tool for genetic resource evaluation in addition to agro-morphological evaluation.
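For reference, PIC (polymorphism information content) values of the kind compared above are commonly computed with the Botstein et al. (1980) formula; a minimal sketch, with illustrative allele frequencies rather than the paper's data:

```python
# PIC = 1 - sum(p_i^2) - sum over i<j of 2 * p_i^2 * p_j^2
from itertools import combinations

def pic(freqs):
    assert abs(sum(freqs) - 1.0) < 1e-9  # frequencies must sum to 1
    homozygosity = sum(p ** 2 for p in freqs)
    correction = sum(2 * (p * q) ** 2 for p, q in combinations(freqs, 2))
    return 1.0 - homozygosity - correction

print(pic([0.5, 0.5]))            # biallelic locus at maximum: 0.375
print(pic([0.4, 0.3, 0.2, 0.1]))  # multiallelic, SSR-like locus
```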
Abstract:
The design and development of spoken interaction systems has been a thoroughly studied research area for decades. The aim is to obtain systems able to interact with human agents with a high degree of naturalness and efficiency, allowing users to carry out the actions they desire using speech, the most natural means of communication between humans. To achieve that degree of naturalness, it is not enough to endow systems with the ability to accurately understand the user's utterances and to react to them properly, even considering the information provided by the user in his or her previous interactions. The system also has to be aware of the evolution of the conditions under which the interaction takes place, in order to act as coherently as possible at each moment. Consequently, one of the most important features of the system is that it has to be context-aware. This context awareness can be reflected in the modification of the system's behaviour according to the current situation of the interaction. For instance, the system should decide which action to carry out, or the way to perform it, depending on the user that requests it, on the way that user addresses the system, on the characteristics of the environment in which the interaction takes place, and so on. In other words, the system has to adapt its behaviour to these evolving elements of the interaction. Moreover, that adaptation has to be carried out, if possible, in such a way that the user: i) does not perceive that the system has to make any additional effort, or devote interaction time, to tasks other than carrying out the requested actions, and ii) does not have to provide the system with any additional information for the adaptation, which would reduce the efficiency of the interaction, since users would have to devote several turns solely to letting the system adapt. In state-of-the-art spoken dialogue systems, researchers have proposed several disparate strategies to adapt the elements of the system to different conditions of the interaction (such as the acoustic characteristics of a specific user's speech, the actions previously requested, and so on). Nevertheless, to our knowledge there is no consensus on the procedures for carrying out this adaptation. The approaches are largely unrelated to one another, in the sense that each one considers different pieces of information and treats that information differently depending on the adaptation carried out. In this regard, the main contributions of this Thesis are the following: Definition of a contextualization framework. We propose a unified approach that can cover any strategy for adapting the behaviour of a dialogue system to the conditions of the interaction (i.e. the context). In our theoretical definition of the contextualization framework we consider the system's context to be all the sources of variability present at any time of the interaction, whether related to the environment in which the interaction takes place or to the human agent that addresses the system at each moment. Our proposal relies on three aspects that any contextualization approach should fulfil: plasticity (i.e. the system has to be able to modify its behaviour in the most proactive way, taking into account the conditions under which the interaction takes place), adaptivity (i.e. 
the system also has to be able to consider the most appropriate sources of information at each moment, both environmental and user- and dialogue-dependent, to effectively adapt to the aforementioned conditions), and transparency (i.e. the system has to carry out the contextualization-related tasks in such a way that the user neither perceives them nor has to make any effort to provide the system with the information it needs to perform that contextualization). Additionally, we include a generality aspect in our proposed framework: its main features should be easy to adopt in any dialogue system, regardless of the solution used to manage the dialogue. Once the theoretical basis of our contextualization framework is defined, we propose two case studies of its application in a spoken dialogue system. We focus on two aspects of the interaction: the contextualization of the speech recognition models, and the incorporation of user-specific information into the dialogue flow. One of the modules of a dialogue system most amenable to contextualization is the speech recognition system. This module makes use of several models to produce a recognition hypothesis from the user's speech signal. Generally speaking, a recognition system considers two types of models: an acoustic one (modelling each of the phonemes that the recognition system has to consider) and a linguistic one (modelling the sequences of words that make sense for the system). In this work we contextualize the language model of the recognition system so that it takes into account the information provided by the user both in the current utterance and in the previous ones, since these utterances convey information that helps the system recognize the next utterance. The contextualization approach that we propose consists of a dynamic adaptation of the language model used by the recognition system. We carry out this adaptation by means of a linear interpolation between several models. Instead of training the best interpolation weights, we make them dependent on the conditions of the dialogue: the system itself obtains these weights as a function of the reliability of the different elements of information available, such as the semantic concepts extracted from the user's utterance, the actions that he or she wants to carry out, the information provided in the previous interactions, and so on. One of the aspects most frequently addressed in Human-Computer Interaction research is the inclusion of user-specific characteristics in the information structures managed by the system. The idea is to take into account the features that make each user different from the others in order to offer each user different services (or the same service, but in a different way). This approach can be considered a user-dependent contextualization of the system. In our work we propose the definition of a user model that contains all the information about each user that could potentially be useful to the system at a given moment of the interaction. In particular, we analyze the actions that each user carries out throughout his or her interactions, with the objective of determining which of these actions become that user's preferences. We represent the specific information of each user as a feature vector, where each characteristic that the system takes into account has an associated confidence score. 
With these elements, we propose a probabilistic definition of a user preference as the action whose likelihood of being requested by the user is greater than that of the rest of the actions. To include the user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve the information needed to resolve the actions requested by the user. Usage preferences thus become another source of contextual information considered by the system, leading to a more efficient interaction, since the new information source helps to reduce the system's need to ask users for additional information, and therefore the number of turns needed to carry out a specific action. To test the benefits of the proposed contextualization framework, we carry out an evaluation of the two strategies described above. We gather several performance metrics, both objective and subjective, that allow us to compare the improvements of a contextualized system against the baseline one. We also gather users' opinions regarding their perception of the system's behaviour and its degree of adaptation to the specific features of each interaction.
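A minimal sketch of the two proposed strategies is given below; the model callables, reliability scores, and action names are hypothetical stand-ins, not the thesis implementation.

```python
# Dynamic language-model interpolation: P(w|h) = sum_i lambda_i * P_i(w|h),
# with the weights lambda_i derived from the dialogue context rather than
# trained once and fixed.
def interpolated_prob(word, history, models, reliabilities):
    """models: callables returning P_i(word | history).
    reliabilities: per-model scores obtained from the dialogue context
    (e.g. confidence of the semantic concepts seen so far)."""
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]  # normalise to sum to 1
    return sum(w * m(word, history) for w, m in zip(weights, models))

def preferred_action(user_vector):
    """User preference: the action with the highest confidence-weighted
    likelihood in the user's feature vector."""
    return max(user_vector, key=user_vector.get)

# Toy usage: a generic LM combined with a dialogue-state-specific LM.
generic = lambda w, h: 0.01
task_specific = lambda w, h: 0.20
p = interpolated_prob("timetable", ["show", "me", "the"],
                      [generic, task_specific], [0.3, 0.7])
print(p)  # 0.3 * 0.01 + 0.7 * 0.20
print(preferred_action({"check_email": 0.6, "play_music": 0.3}))
```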
Abstract:
Models for prediction of oil content, as a percentage of dried weight in olive fruits, were computed through PLS regression on NIR spectra. Spectral preprocessing was carried out by applying multiplicative signal correction (MSC), the Savitzky-Golay algorithm, standard normal variate correction (SNV), and detrending (D) to the NIR spectra. MSC was the preprocessing technique showing the best performance. Further reduction of variability was achieved by applying the Wold method of orthogonal signal correction (OSC). The calibration model achieved an R² of 0.93, a SEPc of 1.42, and an RPD of 3.8. The R² obtained with the validation set remained 0.93, and the SEPc was 1.41.
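A minimal sketch of such a pipeline, assuming scikit-learn and synthetic data (the MSC implementation is the standard regress-against-the-mean-spectrum form; the component count, array shapes, and spectra are illustrative, not the paper's):

```python
# MSC preprocessing of NIR spectra followed by PLS regression on oil content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Fit each spectrum against the mean spectrum (x ~ a + b*ref) and
    remove the additive/multiplicative scatter: x_corr = (x - a) / b."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, deg=1)
        corrected[i] = (x - a) / b
    return corrected, ref

rng = np.random.default_rng(1)
wavelengths = np.linspace(1100, 2500, 700)             # nm, NIR range
base = np.sin(wavelengths / 200.0)                     # shared spectral shape
slopes = rng.uniform(0.8, 1.2, size=(60, 1))           # multiplicative scatter
offsets = rng.uniform(-0.1, 0.1, size=(60, 1))         # additive scatter
X = offsets + slopes * base + rng.normal(0, 0.01, size=(60, 700))
y = 15 + 10 * slopes.ravel() + rng.normal(0, 0.5, 60)  # oil content, % dry weight

X_msc, ref = msc(X)
pls = PLSRegression(n_components=8).fit(X_msc, y)
print("calibration R^2:", pls.score(X_msc, y))
```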
Design and Simulation of Deep Nanometer SRAM Cells under Energy, Mismatch, and Radiation Constraints
Abstract:
Reliability is becoming the main concern in integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in the electrical characteristics of devices, which must be dealt with during design. The new processes and materials required to fabricate such extremely small devices are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected by process variations, since the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges of SRAM design in the smallest technologies. In a scenario of increasing variability, issues such as energy consumption, technology-aware design, and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability appearing as a consequence of new devices and shortened lengths, accurate modeling of that variability is crucial. 
We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, with two new injectors that better model the sub-threshold slope and drain-induced barrier lowering (DIBL), effects that are gaining importance in FinFET technology. The two proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while yield estimation improves by orders of magnitude. Low-power design is a major concern given the fast-growing market of battery-powered mobile devices. It is also relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line, the design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, is introduced to deal with the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms need to be included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, consider the compressive or tensile strain applied to fins in FinFET technology, which alters the mobility of the transistors built on those fins. The effects of these mechanisms are strongly layout-dependent: transistors are affected by their neighbors, and different transistor types are affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to reduce the fin length of nMOS devices and achieve long uncut fins for the pMOS devices when the cell is placed in its corresponding array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead. While radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making them vulnerable to radiation-induced transient noise even at ground level. Even though SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation-hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7 nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened memory cells, suggesting that reducing process variations would bring a greater improvement.
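In the spirit of the injectors method described above (independent variability sources injected per transistor), the sketch below estimates a cell failure probability by Monte Carlo; the noise-margin model and all numbers are toy placeholders, not SPICE-level results.

```python
# Monte Carlo failure estimation under injected per-transistor Vth variation.
import numpy as np

rng = np.random.default_rng(7)
sigma_vth = 0.05    # V, per-transistor threshold-voltage spread (assumed)
snm_nominal = 0.12  # V, nominal static noise margin (assumed)
sensitivity = 0.8   # toy linear sensitivity of the SNM to the worst mismatch

# One independent Gaussian Vth shift per transistor of a 6T cell; the margin
# is degraded by the worst-case mismatch in each sample.
shifts = rng.normal(0.0, sigma_vth, size=(100_000, 6))
snm = snm_nominal - sensitivity * np.abs(shifts).max(axis=1)
p_fail = (snm <= 0).mean()
print(f"estimated cell failure probability: {p_fail:.2e}")
```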
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for their slower clock frequencies and less efficient area utilization with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. 
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms in each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that reduce execution times by approaching the problem from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn implies a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible, and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
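As an illustration of the kind of classical greedy word-length search whose execution time the thesis reduces, here is a minimal sketch with a placeholder error model (a real flow would plug in quantization-noise estimates or Monte Carlo simulations):

```python
# Greedy word-length optimization: start wide, shrink while the error budget holds.
def greedy_wordlengths(signals, error_fn, max_error, wl_max=32, wl_min=4):
    """Repeatedly try to remove one bit from each signal, keeping any
    reduction whose error estimate stays within the budget."""
    wl = {s: wl_max for s in signals}
    improved = True
    while improved:
        improved = False
        for s in signals:
            if wl[s] > wl_min:
                trial = dict(wl, **{s: wl[s] - 1})
                if error_fn(trial) <= max_error:
                    wl = trial
                    improved = True
    return wl

# Toy error model: total quantization-noise power ~ sum of 2^(-2*wl) terms.
error = lambda wl: sum(2.0 ** (-2 * b) for b in wl.values())
print(greedy_wordlengths(["x", "acc", "y"], error, max_error=1e-6))
```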