968 results for Tightly-coupled


Relevance: 60.00%

Abstract:

We examine the possibility that a glacial increase in the areal extent of reducing sediments changed the oceanic Cd inventory, thereby decoupling Cd from PO4. We suggest that the precipitation of Cd-sulfide in suboxic sediments is the single largest sink in the oceanic Cd budget and that the accumulation of authigenic Cd and U is tightly coupled to the organic carbon flux to the seafloor. Sediments from the Subantarctic Ocean and the Cape Basin (South Atlantic), where oxic conditions currently prevail, show high accumulation rates of authigenic Cd and U during glacial intervals associated with increased accumulation of organic carbon. These elemental enrichments attest to more reducing conditions in glacial sediments in response to an increased flux of organic carbon. A third core, overlain by Circumpolar Deep Water (CPDW) like the other two cores but located south of the Antarctic Polar Front, shows an approximately inverse pattern to the Subantarctic record. The contrasting patterns north and south of the Antarctic Polar Front suggest that higher accumulation rates of Cd and U in Subantarctic sediments were driven primarily by increased productivity. This proposal is consistent with the hypothesis of glacial-stage northward migration of the Antarctic Polar Front and its associated belt of high siliceous productivity. However, the increase in glacial accumulation rates of authigenic Cd and U is higher than expected from a northward shift of the polar fronts alone, suggesting greater partitioning of organic carbon into the sediments during glacial intervals. Lower oxygen content of CPDW and a higher ratio of organic carbon to biogenic silica rain rate during glacial stages are possible causes. Higher glacial productivity in the Cape Basin record very likely reflects enhanced coastal upwelling in response to increased wind speeds. We suggest that higher productivity might have doubled the areal extent of suboxic sediments during the last glacial maximum. However, our calculations suggest low sensitivity of seawater Cd concentrations to a glacial doubling of the extent of reducing sediments. The model suggests that during the last 250 kyr seawater Cd concentrations fluctuated only slightly, between highs of about 0.66 nmol/kg at glacial initiations and lows of about 0.57 nmol/kg during glacial maxima. The estimated 5% lower Cd content at the last glacial maximum relative to modern levels (0.60 nmol/kg) cannot explain the discordance between Cd and delta13C observed in the Southern Ocean. This low sensitivity is consistent with foraminiferal data suggesting minimal change in the glacial mean oceanic Cd content.
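The low-sensitivity result invites a back-of-envelope check. The sketch below is a one-box Cd mass balance with a hypothetical residence time, input flux, and sulfide-sink fraction (none are the paper's values); doubling the suboxic sink area during glacial intervals produces only a damped drawdown because the residence time is comparable to the duration of the forcing.

```python
import numpy as np

C_MODERN = 0.60          # nmol/kg, modern mean seawater Cd
TAU = 50e3               # yr, assumed Cd residence time
F_IN = C_MODERN / TAU    # nmol/kg/yr, input flux normalized per kg seawater
F_SULFIDE = 0.3          # assumed fraction of Cd removal via the CdS sink

def run(cycle=100e3, glacial_frac=0.3, t_max=250e3, dt=50.0):
    """Integrate dC/dt = F_IN - k(t) * C, doubling the suboxic-sink
    area (and hence the CdS part of k) during glacial intervals."""
    c, trace = C_MODERN, []
    for step in range(int(t_max / dt)):
        t = step * dt
        glacial = (t % cycle) > (1.0 - glacial_frac) * cycle
        area = 2.0 if glacial else 1.0
        k = (1.0 - F_SULFIDE + area * F_SULFIDE) / TAU   # total removal, 1/yr
        c += dt * (F_IN - k * c)
        trace.append(c)
    return np.array(trace)

cd = run()
print(f"Cd over 250 kyr stays within {cd.min():.2f}-{cd.max():.2f} nmol/kg")
```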

Relevance: 60.00%

Abstract:

A continuous 3.5 Myr IRD record was produced from Ocean Drilling Program (ODP) Site 907. A timescale was developed based on magnetic polarity chrons, oxygen isotope stratigraphy (for the last 1 Myr), and orbital tuning. The record documents a stepwise inception of large-scale glacial cycles in the Nordic Seas region, the first step being a marked expansion of the Greenland ice sheet at 3.3 Ma. A second step occurred at 2.74 Ma with the expansion of large-scale ice sheets in the Northern Hemisphere. Ice sheet variability around the Nordic Seas was tightly coupled to global ice volume over the past 3.3 Myr. Between 3 and 1 Ma, most of the variance of the IRD signal is in the 41-kyr band, whereas the last 1 Myr is characterized by stronger 100-kyr variance. The gamma-ray attenuation porosity evaluator (GRAPE) density record is closely linked with IRD variations and documents suborbital variability resembling the late Quaternary Heinrich/Bond cycles.
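The shift from 41-kyr to 100-kyr variance is the kind of statement that band-integrated spectral power makes concrete. A minimal sketch, run on a synthetic stand-in for an evenly resampled proxy series (the IRD data themselves are not reproduced here):

```python
import numpy as np

dt = 1.0                                   # sample spacing, kyr
t = np.arange(0.0, 1000.0, dt)             # a 1 Myr window
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * t / 41.0) + 0.3 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per kyr
power = np.abs(np.fft.rfft(x - x.mean())) ** 2

def band_power(period_hi, period_lo):
    """Sum spectral power between two periods (kyr)."""
    sel = (freqs >= 1.0 / period_hi) & (freqs <= 1.0 / period_lo)
    return power[sel].sum()

print("41-kyr band power: ", band_power(45.0, 37.0))
print("100-kyr band power:", band_power(120.0, 80.0))
```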

Relevance: 60.00%

Abstract:

We report new 187Os/186Os data and Re and Os concentrations in metalliferous sediments from the Pacific to construct a composite Os isotope seawater evolution curve over the past 80 m.y. Analyses of four samples of upper Cretaceous age yield 187Os/186Os values between 3 and 6.5 and 187Re/186Os values below 55. Mass balance calculations indicate that the pronounced minimum of about 2 in the Os isotope ratio of seawater at the K-T boundary probably reflects the enormous input of cosmogenic material into the oceans by the K-T impactor(s). Following a rapid recovery to a 187Os/186Os of 3.5 at 63 Ma, data for the early and middle Cenozoic show an increase in 187Os/186Os to about 6 at 15 Ma. The isotopic composition of leachable Os from slowly accumulating metalliferous sediments shows large fluctuations over short time spans. In contrast, analyses of rapidly accumulating metalliferous carbonates do not exhibit the large oscillations observed in the pelagic clay leach data. These results, together with sediment leaching experiments, indicate that dissolution of non-hydrogenous Os can occur during the hydrogen peroxide leach and demonstrate that Os data from pelagic clay leachates do not always reflect the Os isotopic composition of seawater. New data for the late Cenozoic further substantiate the rapid increase in the 187Os/186Os of seawater during the past 15 Ma. We interpret the correlation between the marine Sr and Os isotope records during this time period as evidence that weathering within the drainage basin of the Ganges-Brahmaputra river system is responsible for driving seawater Sr and Os toward more radiogenic isotopic compositions. The positive correlation between 87Sr/86Sr and U concentration, the covariation of U and Re concentrations, and the high dissolved Re, U, and Sr concentrations found in Ganges-Brahmaputra river waters support this interpretation. Accelerating uplift of many orogens worldwide over the past 15 Ma, especially during the last 5 Ma, could have contributed to the rapid increase in 187Os/186Os from 6 to 8.5 over this interval. Prior to 15 Ma, the marine Sr and Os records are not tightly coupled. The heterogeneous distribution of lithologies within eroding terrains may play an important role in decoupling the supplies of radiogenic Os and Sr to the oceans and account for the periods of decoupling of the marine Sr and Os isotope records.
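The K-T mass balance amounts to two-endmember isotope mixing. A minimal sketch with rounded, assumed endmember ratios (not the paper's exact inputs):

```python
# Mixing is linear in 187Os/186Os when weighted by the common 186Os basis:
# R_mix = f * R_cosmic + (1 - f) * R_seawater.
R_SEAWATER = 3.5   # assumed pre-impact seawater 187Os/186Os
R_COSMIC = 1.0     # approximate chondritic 187Os/186Os
R_KT = 2.0         # observed K-T boundary minimum

f = (R_SEAWATER - R_KT) / (R_SEAWATER - R_COSMIC)
print(f"implied cosmogenic fraction of marine Os at the K-T boundary: {f:.0%}")
```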

Relevance: 60.00%

Abstract:

Corresponding millennial-scale climate changes have been reported from the North Atlantic region and from east Asia for the last glacial period, but only on independent timescales. To assess their degree of synchrony, we suggest interpreting Greenland ice core dust parameters as proxies for the east Asian monsoon systems. This allows North Atlantic and east Asian climate to be compared on the same timescale in high-resolution ice core data, without relative dating uncertainties. We find that during Dansgaard-Oeschger events, North Atlantic region temperature and east Asian storminess were tightly coupled and changed synchronously within 5-10 years, with no systematic lead or lag, thus providing instantaneous climatic feedback. The tight link between North Atlantic and east Asian glacial climate could have amplified changes in the northern polar cell to larger scales. We further find evidence for an early onset of a Younger Dryas-like event in continental Asia, which indicates heterogeneous climate change within east Asia during the last deglaciation.
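A bound like "synchronous within 5-10 years with no systematic lead or lag" is typically obtained by lagged cross-correlation of the two records on the common timescale. A minimal sketch on synthetic annually resolved proxies (constructed synchronous, so the recovered lag should be near zero):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000                                   # years on a common timescale
a = rng.standard_normal(n).cumsum()        # red-noise "temperature" proxy
b = a + 0.5 * rng.standard_normal(n)       # "dust" proxy, synchronous here

a = (a - a.mean()) / a.std()
b = (b - b.mean()) / b.std()
lags = range(-50, 51)                      # candidate leads/lags, years
cc = [np.corrcoef(a[max(0, l):n + min(0, l)],
                  b[max(0, -l):n - max(0, l)])[0, 1] for l in lags]
print("best-correlated lag (yr):", list(lags)[int(np.argmax(cc))])
```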

Relevance: 60.00%

Abstract:

Variations in the strength of coastal upwelling in the southeast Atlantic Ocean and of summer monsoonal rains over South Africa are controlled by the regional atmospheric circulation regime. Although information about these parameters exists for the last glacial period, little detailed information exists for older time periods. New information from ODP Site 1085 for Marine Isotope Stages (MIS) 12-10 shows that glacial-interglacial productivity trends linked to upwelling variability followed a pattern similar to the last glacial cycle, with maxima shortly before glacial maxima and minima shortly before glacial terminations. During the MIS 11/10 transition, several periodic oscillations in productivity and monsoonal proxies are best explained by southward shifts of the southern subtropical high-pressure cells followed by abrupt northward shifts. Comparison with coeval sea-surface temperature measurements suggests that these monsoonal cycles were tightly coupled to anti-phased hemispheric climate change, with an intensified summer monsoon during periods of Northern (Southern) Hemisphere cooling (warming). The timing of these events suggests pacing by insolation at precession periods. The lack of similar regional circulation shifts during the MIS 13/12 transition is likely due to the large equatorward shift of the tropical convection zone during this extreme glaciation.

Relevance: 60.00%

Abstract:

We report a near-continuous stable isotopic record for the Pliocene-Pleistocene (4.8 to 0.8 Ma) from Ocean Drilling Program Site 704 in the sub-Antarctic South Atlantic (47°S, 7°E). During the early to middle Pliocene (4.8 to 3.2 Ma), variation in delta18O was less than ~0.5 per mil, and absolute values were generally less than those of the Holocene. These results indicate some warming and minor deglaciation of Antarctica during intervals of the Pliocene but are inconsistent with scenarios calling for major warming and deglaciation of the Antarctic ice sheet. The climate system operated within relatively narrow limits prior to ~3.2 Ma, and the Antarctic cryosphere probably did not fluctuate on a large scale until the late Pliocene. Benthic oxygen isotopic values exceeded 3 per mil for the first time at 3.16 Ma. The amplitude and mean of the delta18O signal increased at 2.7 Ma, suggesting a shift in climate mode during the latest Gauss. The greatest delta18O values of the Gauss and Gilbert chrons occurred at ~2.6 Ma, just below a hiatus that removed the interval from ~2.6 to 2.3 Ma at Site 704. These results agree with those from Subantarctic Site 514, which suggest that the latest Gauss (2.68 to 2.47 Ma) was the time of greatest change in Neogene climate in the northern Antarctic and Subantarctic regions. During this period, surface water cooled as the Polar Front Zone (PFZ) migrated north and perennial sea ice cover expanded into the Subantarctic region. Antarctic ice volume increased and the ventilation rate of Southern Ocean deep water decreased during glacial events after 2.7 Ma. We suggest that these changes in the Southern Ocean were related to a gradual lowering of sea level and a reduction in the flux of North Atlantic Deep Water (NADW) with the initiation of ice growth in the Northern Hemisphere. The early Matuyama Chron (~2.3 to 1.7 Ma) was marked by relatively warm climates in the Southern Ocean except for strong glacial events associated with isotopic stages 82 (2.027 Ma), 78 (1.941 Ma), and 70 (1.782 Ma). At 1.67 Ma (stage 65/64 transition), surface waters cooled as the PFZ migrated equatorward and oscillated about a far northerly position for a prolonged interval between 1.67 and 1.5 Ma (stages 65 to 57). Beginning at ~1.42 Ma (stage 52), all parameters (delta18O, delta13C, %opal, %CaCO3) in Hole 704 become highly correlated with each other and display a very strong 41-kyr cyclicity. This increase in the importance of the 41-kyr cycle is attributed to an increase in the amplitude of the Earth's obliquity cycle, likely reinforced by increased glacial suppression of NADW, which may explain the tightly coupled response that developed between the Southern Ocean and the North Atlantic beginning at ~1.42 Ma (stage 52).

Relevance: 60.00%

Abstract:

Progressive ocean acidification due to anthropogenic CO2 emissions will alter marine ecosystem processes. Calcifying organisms might be particularly vulnerable to these alterations in the speciation of the marine carbonate system. While previous research efforts have mainly focused on external dissolution of shells in seawater undersaturated with respect to calcium carbonate, the internal shell interface might be more vulnerable to acidification. In the case of the blue mussel Mytilus edulis, high body fluid pCO2 causes low pH and low carbonate concentrations in the extrapallial fluid, which is in direct contact with the inner shell surface. In order to test whether elevated seawater pCO2 impacts calcification and inner shell surface integrity, we exposed Baltic M. edulis to four different seawater pCO2 levels (39, 142, 240, 405 Pa) and two food algae concentrations (310-350 vs. 1600-2000 cells mL-1) for a period of seven weeks during winter (5°C). We found that low food algae concentrations and high pCO2 values each significantly decreased shell length growth. Internal shell surface corrosion of the nacreous (= aragonite) layers was documented via stereomicroscopy and SEM in the two highest pCO2 treatments in the high food group, while it was found in all treatments in the low food group. Both factors, food and pCO2, significantly influenced the magnitude of inner shell surface dissolution. Our findings illustrate for the first time that the integrity of inner shell surfaces is tightly coupled to the animals' energy budget under conditions of CO2 stress. It is likely that under food-limited conditions, energy is allocated to more vital processes (e.g. somatic mass maintenance) instead of shell conservation. It is evident from our results that mussels exert significant biological control over the structural integrity of their inner shell surfaces.
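Whether the extrapallial fluid corrodes the nacre reduces to the aragonite saturation state. A minimal sketch with assumed concentrations and an approximate solubility product (the study reports pH and pCO2, not these exact values):

```python
# Illustrative only: concentrations and Ksp are rounded literature-style
# values, not measurements from this study.
ca = 0.0103             # [Ca2+] in a seawater-like fluid, mol/kg
co3 = 3.0e-5            # assumed [CO3 2-] in low-pH extrapallial fluid, mol/kg
ksp_aragonite = 6.8e-7  # approximate stoichiometric Ksp, mol^2/kg^2

omega = ca * co3 / ksp_aragonite
state = "favors dissolution" if omega < 1.0 else "is stable"
print(f"Omega_aragonite = {omega:.2f} -> the inner shell surface {state}")
```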

Relevance: 60.00%

Abstract:

Arctic vegetation is characterized by high spatial variability in plant functional type (PFT) composition and gross primary productivity (P). Despite this variability, the two main drivers of P in sub-Arctic tundra are leaf area index (LT) and total foliar nitrogen (NT). LT and NT have been shown to be tightly coupled across PFTs in sub-Arctic tundra vegetation, which simplifies up-scaling by allowing quantification of the main drivers of P from remotely sensed LT. Our objective was to test the LT-NT relationship across multiple Arctic latitudes and to assess LT as a predictor of P for the pan-Arctic. Including PFT-specific parameters in models of LT-NT coupling provided only incremental improvements in model fit, but significant improvements were gained from including site-specific parameters. The degree of curvature in the LT-NT relationship, controlled by a fitted canopy nitrogen extinction coefficient, was negatively related to average levels of diffuse radiation at a site. This is consistent with theoretical predictions of more uniform vertical canopy N distributions under diffuse light conditions. Higher latitude sites had higher average leaf N content by mass (NM), and we show for the first time that LT-NT coupling is achieved across latitudes via canopy-scale trade-offs between NM and leaf mass per unit leaf area (LM). Site-specific parameters provided small but significant improvements in models of P based on LT and moss cover. Our results suggest that differences in LT-NT coupling between sites could be used to improve pan-Arctic models of P, and we provide unique evidence that prevailing radiation conditions can significantly affect N allocation over regional scales.
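The fitted extinction coefficient enters through the standard exponential canopy N profile; integrating it over canopy depth gives the curved LT-NT relationship. A minimal sketch with illustrative parameter values (not the paper's fits):

```python
import numpy as np

def total_foliar_n(lai, n0=2.0, k=0.5):
    """NT = N0 * (1 - exp(-k * LT)) / k, the integral of
    N(L) = N0 * exp(-k * L) from the canopy top (L = 0) down to LT."""
    return n0 * (1.0 - np.exp(-k * lai)) / k

for lai in (0.5, 1.0, 2.0, 4.0):
    print(f"LT = {lai:.1f} -> NT = {total_foliar_n(lai):.2f} g N m-2 ground")
# A smaller k (as predicted under diffuse light) gives a more uniform
# vertical N profile and hence a more nearly linear LT-NT relationship.
```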

Relevance: 60.00%

Abstract:

In this paper, we present an algorithm to create 3D segmentations of neuronal cells from stacks of previously segmented 2D images. The idea behind this proposal is to provide a general method to reconstruct 3D structures from 2D stacks, regardless of how these 2D stacks have been obtained. The algorithm not only reuses the information obtained in the 2D segmentation, but also attempts to correct some typical mistakes made by 2D segmentation algorithms (for example, under-segmentation of tightly-coupled clusters of cells). We have tested our algorithm in a real scenario: the segmentation of the neuronal nuclei in different layers of the rat cerebral cortex. Several representative images from different layers of the cerebral cortex have been considered, and several 2D segmentation algorithms have been compared. Furthermore, the algorithm has also been compared with the traditional 3D watershed algorithm, and the results show better performance in terms of correctly identified neuronal nuclei.
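A common baseline for this 2D-to-3D step is to link labels across consecutive slices by maximal overlap. The sketch below implements only that baseline heuristic; the paper's algorithm additionally corrects 2D errors such as under-segmentation:

```python
import numpy as np

def link_slices(stack):
    """stack: list of 2D int arrays of per-slice labels (0 = background).
    Returns the stack relabeled with consistent 3D object ids."""
    out = [stack[0].copy()]
    next_id = int(stack[0].max()) + 1
    for sl in stack[1:]:
        relabeled = np.zeros_like(sl)
        for lbl in np.unique(sl):
            if lbl == 0:
                continue
            mask = sl == lbl
            prev = out[-1][mask]
            prev = prev[prev > 0]
            if prev.size:                     # inherit the dominant overlap id
                ids, counts = np.unique(prev, return_counts=True)
                relabeled[mask] = ids[np.argmax(counts)]
            else:                             # no overlap: start a new 3D object
                relabeled[mask] = next_id
                next_id += 1
        out.append(relabeled)
    return out

# Two toy 3x3 slices: one nucleus spanning both, one appearing only in slice 2.
s1 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
s2 = np.array([[2, 2, 0], [2, 2, 0], [0, 0, 3]])
print([a.tolist() for a in link_slices([s1, s2])])
```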

Relevance: 60.00%

Abstract:

IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address the respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses.
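The deterministic-probabilistic coupling can be illustrated by sampling a stochastic disturbance and propagating each sample through a deterministic transient model. The sketch below is a toy example with invented dynamics, rates, and a 400 K damage threshold, not any published IDPSA method:

```python
import numpy as np

rng = np.random.default_rng(0)

def transient(t_fail, t_end=2000.0, dt=1.0):
    """Deterministic plant response: a heat-up starts at the sampled
    failure time and ends 300 s later when the safety system actuates."""
    temp, peak = 300.0, 300.0
    for step in range(int(t_end / dt)):
        t = step * dt
        heating = 1.0 if t_fail <= t < t_fail + 300.0 else 0.0
        temp += dt * (0.5 * heating - 0.002 * (temp - 300.0))
        peak = max(peak, temp)
    return peak

# Probabilistic layer: sample the stochastic disturbance (failure time).
fail_times = rng.exponential(scale=5000.0, size=2000)
peaks = np.array([transient(tf) for tf in fail_times if tf < 2000.0])

p_fail = (fail_times < 2000.0).mean()            # frequency of the initiator
p_damage = (peaks > 400.0).mean() if peaks.size else 0.0
print(f"P(failure in window) = {p_fail:.2f}, "
      f"P(peak > 400 K | failure) = {p_damage:.2f}")
```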

Relevance: 60.00%

Abstract:

A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolutionary behavior of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine able to solve NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (which require high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide through simple bio-operations, but only fit words (much as in natural selection) are kept for the next configuration. As a computational tool, a NEP defines a parallel and distributed architecture for symbolic processing; in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency, and universality have been widely studied and proved. We can therefore consider the theoretical NEP model to have reached maturity.

The main motivation of this Bachelor's thesis (Proyecto de Fin de Grado) is to propose a practical approach for taking the theoretical NEP model to a real implementation that can run on high-performance computing platforms, in order to solve the complex problems that today's society demands. Until now, the tools developed to simulate the NEP model, although correct and with satisfactory results, have usually been tied to their execution environment, whether through the use of specific hardware or problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or one of its variants), either locally, as a traditional application, or distributed using cloud services.

Nepfix is a software application developed over seven months that is currently in its second iteration, the prototype phase having been abandoned. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix contains two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP model family.
Additionally, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix by means of a scripting language. As part of this component, a standard representation of the NEP model based on the JSON format has also been defined, and a representation and encoding of words, necessary for communication between servers, is proposed. An important characteristic of this component is that it can be considered a standalone application, so the distribution and execution strategies are completely independent of it.

The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. It is worth highlighting not only for the expected practical results, but also for the research process that this new perspective on executing natural computing systems requires. The main characteristic of applications running in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the rest of the instances. As added value, Nepfix addresses two different implementation perspectives on the distribution and execution model (developed in two different iterations), which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with regard to load balancing between processors), but it produces undesired effects such as nondeterminism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration corresponds to the synchronous execution model: the elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server. In this perspective, however, for sufficiently large problems the benefits outweigh the drawbacks, since distribution is immediate (there are no restrictions), although the scaling process is not trivial.

In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains.
Other problems, beyond the scope of this project, remain open to future development, such as the standardization of word representation and optimizations in the execution of the synchronous model. Finally, some preliminary results of this Bachelor's thesis were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than a Bachelor's thesis, is only the beginning of work that may have a greater impact on the scientific community.

Abstract

A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells are accepted as surviving (correct) ones which are represented by a word in a given set of words, called the genotype space of the species. This feature is analogous to the natural process of evolution. Formally, NEP is based on an architecture for parallel and distributed processing; in other words, a network of language processors. Since the date when NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity.

The main motivation for this End of Grade project (EOG project, in short) is to propose a practical approximation that allows closing the gap between the theoretical NEP model and a practical implementation on high-performing computational platforms, in order to solve some of the high-complexity problems society requires today. Up until now, tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either in a local way, similar to a traditional application, or in a distributed cloud environment.

Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration, the prototype period having been abandoned. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required, and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as mechanisms to represent words and their possible manipulations. The NEP simulator is isolated from distribution; as mentioned before, different applications that include it as a dependency are possible, and the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud. The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider that Nepfix as a computational framework is successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future work. In this EOG project, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization and NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since then, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
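To make the evolve/communicate semantics concrete, here is a toy synchronous NEP-style cycle. Nepfix itself is a Java application; this Python sketch uses invented rules and filters and is not Nepfix's API:

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    rules: list                 # (old_symbol, new_symbol) substitutions
    out_filter: callable        # word -> bool: may the word leave?
    in_filter: callable         # word -> bool: may the word enter?
    words: set = field(default_factory=set)

    def evolve(self):
        """Apply each substitution rule once to every resident word."""
        new = set()
        for w in self.words:
            for a, b in self.rules:
                if a in w:
                    new.add(w.replace(a, b, 1))
        self.words |= new

def communicate(net):
    """Synchronization step: filtered words move over a complete graph."""
    leaving = [{w for w in p.words if p.out_filter(w)} for p in net]
    for p, out in zip(net, leaving):
        p.words -= out
    for out in leaving:
        for q in net:
            q.words |= {w for w in out if q.in_filter(w)}

net = [
    Processor([("a", "b")], lambda w: "b" in w, lambda w: False, {"aa"}),
    Processor([("b", "c")], lambda w: False, lambda w: "b" in w),
]
for _ in range(3):              # three configurations of the computation
    for p in net:
        p.evolve()
    communicate(net)
print([sorted(p.words) for p in net])
```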

Relevance: 60.00%

Abstract:

Mutation of Bruton's tyrosine kinase (Btk) impairs B cell maturation and function and results in a clinical phenotype of X-linked agammaglobulinemia. Activation of Btk correlates with an increase in the phosphorylation of two regulatory Btk tyrosine residues. Y551 (site 1), within the Src homology type 1 (SH1) domain, is transphosphorylated by the Src family tyrosine kinases. Y223 (site 2) is an autophosphorylation site within the Btk SH3 domain. Polyclonal, phosphopeptide-specific antibodies were developed to evaluate the phosphorylation of Btk sites 1 and 2. Crosslinking of the B cell antigen receptor (BCR) or the mast cell Fcɛ receptor, or stimulation of the interleukin 5 receptor, each induced rapid phosphorylation at Btk sites 1 and 2 in a tightly coupled manner. Btk molecules were singly and doubly tyrosine-phosphorylated. Phosphorylated Btk comprised only a small fraction (≤5%) of the total pool of Btk molecules in BCR-activated B cells. Increased dosage of Lyn in B cells augmented BCR-induced phosphorylation at both sites. Kinetic analysis supports a sequential activation mechanism in which individual Btk molecules undergo serial transphosphorylation (site 1) and then autophosphorylation (site 2), followed by successive dephosphorylation of site 1 and then site 2. The phosphorylation of conserved tyrosine residues within structurally related Tec family kinases is likely to regulate their activation.
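The proposed sequential mechanism maps naturally onto a four-state kinetic cycle. The sketch below integrates such a cycle with arbitrary rate constants, purely to illustrate the ordering (transphosphorylation, autophosphorylation, then stepwise dephosphorylation), not to reproduce the measured kinetics:

```python
import numpy as np

# States: fractions of Btk in [U, pY551, pY551/pY223, pY223], cycling
# U -> pY551 -> pY551/pY223 -> pY223 -> U.
k = np.array([0.05, 0.2, 0.1, 0.05])      # 1/s, arbitrary illustrative rates
state = np.array([1.0, 0.0, 0.0, 0.0])
dt, trace = 0.1, []
for _ in range(int(120.0 / dt)):          # two minutes after crosslinking
    flows = k * state                     # flux out of each state
    state = state + dt * (np.roll(flows, 1) - flows)
    trace.append(state.copy())

peak = max(trace, key=lambda s: s[2])
print(f"peak doubly phosphorylated fraction: {peak[2]:.2f}")
```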

Relevance: 60.00%

Abstract:

A cell of the bacterium Escherichia coli was tethered covalently to a glass coverslip by a single flagellum, and its rotation was stopped by using optical tweezers. The tweezers acted directly on the cell body or indirectly, via a trapped polystyrene bead. The torque generated by the flagellar motor was determined by measuring the displacement of the laser beam on a quadrant photodiode. The coverslip was mounted on a computer-controlled piezoelectric stage that moved the tether point in a circle around the center of the trap so that the speed of rotation of the motor could be varied. The motor generated ≈4500 pN nm of torque at all angles, regardless of whether it was stalled, allowed to rotate very slowly forwards, or driven very slowly backwards. This argues against models of motor function in which rotation is tightly coupled to proton transit and back-transport of protons is severely limited.
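The torque figure comes from force times lever arm, with the force read off the calibrated trap. A back-of-envelope sketch with assumed stiffness, displacement, and lever arm (not the paper's calibration values):

```python
kappa = 0.05   # pN/nm, assumed trap stiffness from calibration
dx = 90.0      # nm, bead displacement read off the quadrant photodiode
r = 1000.0     # nm, assumed lever arm from the rotation axis to the trap

force = kappa * dx        # restoring force on the bead, pN
torque = force * r        # motor torque, pN nm
print(f"motor torque ~ {torque:.0f} pN nm")   # ~4500 pN nm, cf. the abstract
```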

Relevance: 60.00%

Abstract:

Coupling of cerebral blood flow (CBF) and the cerebral metabolic rate for oxygen (CMRO2) in physiologically activated brain states remains the subject of debate. Recently it was suggested that CBF is tightly coupled to oxidative metabolism in a nonlinear fashion. As part of this hypothesis, mathematical models of oxygen delivery to the brain have been described in which disproportionately large increases in CBF are necessary to sustain even small increases in CMRO2 during activation. We have explored the coupling of CBF and oxygen delivery using two complementary methods. First, a more complex mathematical model was tested that differs from those recently described in that no assumptions were made regarding tissue oxygen level. Second, [15O]water CBF positron emission tomography (PET) studies were conducted in nine healthy subjects during states of visual activation and hypoxia to examine the relationship of CBF and oxygen delivery. In contrast to previous reports, our model showed that adequate tissue levels of oxygen could be maintained without the need for increased CBF or oxygen delivery. Similarly, the PET studies demonstrated that the regional increase in CBF during visual activation was not affected by hypoxia. These findings strongly indicate that the increase in CBF associated with physiological activation is regulated by factors other than local requirements for oxygen.
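The class of model being argued against couples CBF to CMRO2 through a falling oxygen extraction fraction, e.g. a Buxton-Frank-style diffusion-limitation form. A minimal sketch, assuming a resting extraction fraction E0 = 0.4, showing why such models require disproportionately large CBF increases:

```python
# Diffusion-limitation coupling: E(f) = 1 - (1 - E0)**(1/f), f = CBF/CBF0,
# so relative CMRO2 = f * E(f) / E0.
E0 = 0.4                      # assumed resting oxygen extraction fraction
for f in (1.0, 1.2, 1.5, 2.0):
    e = 1.0 - (1.0 - E0) ** (1.0 / f)
    print(f"CBF x{f:.1f} -> CMRO2 x{f * e / E0:.2f}")
# Doubling CBF raises CMRO2 by only ~13% under these assumptions, which is
# the disproportionality the present results call into question.
```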

Relevance: 60.00%

Abstract:

Degradation of proteins that, because of improper or suboptimal processing, are retained in the endoplasmic reticulum (ER) involves retrotranslocation to reach the cytosolic ubiquitin-proteasome machinery. We found that substrates of this pathway, the precursor of the human asialoglycoprotein receptor H2a and free heavy chains of murine class I major histocompatibility complex (MHC), accumulate in a novel pre-Golgi compartment that is adjacent to but not overlapping with the centrosome, the Golgi complex, and the ER-to-Golgi intermediate compartment (ERGIC). On its way to degradation, H2a associated increasingly with the ER translocon Sec61 after synthesis. Nevertheless, it remained in the secretory pathway upon proteasomal inhibition, suggesting that its retrotranslocation must be tightly coupled to the degradation process. In the presence of proteasomal inhibitors, the ER chaperones calreticulin and calnexin, but not BiP, PDI, or glycoprotein glucosyltransferase, concentrate in the subcellular region of the novel compartment. This “quality control” compartment is possibly a subcompartment of the ER. It depends on microtubules but is insensitive to brefeldin A. We discuss the possibility that it is also the site for concentration and retrotranslocation of proteins that, like the mutant cystic fibrosis transmembrane conductance regulator, are transported to the cytosol, where they form large aggregates, the “aggresomes.”