917 results for BATCH INJECTION ANALYSIS
Abstract:
DNA extraction was carried out as described on the MICROBIS project pages (http://icomm.mbl.edu/microbis ) using a commercially available extraction kit. We amplified the hypervariable regions V4-V6 of archaeal and bacterial 16S rRNA genes using PCR and several sets of forward and reverse primers (http://vamps.mbl.edu/resources/primers.php). Massively parallel tag sequencing of the PCR products was carried out on a 454 Life Sciences GS FLX sequencer at the Marine Biological Laboratory, Woods Hole, MA, following the same experimental conditions for all samples. Sequence reads were submitted to a rigorous quality control procedure based on mothur v30 (doi:10.1128/AEM.01541-09), including denoising of the flowgrams using an algorithm based on PyroNoise (doi:10.1038/nmeth.1361), removal of PCR errors and a chimera check using uchime (doi:10.1093/bioinformatics/btr381). The reads were taxonomically assigned according to the SILVA taxonomy (SSURef v119, 07-2014; doi:10.1093/nar/gks1219) implemented in mothur and clustered at 98% ribosomal RNA gene V4-V6 sequence identity. V4-V6 amplicon sequence abundance tables were standardized to account for unequal sampling effort using 1000 (Archaea) and 2300 (Bacteria) randomly chosen sequences without replacement using mothur, and then used to calculate inverse Simpson diversity indices and Chao1 richness (doi:10.2307/4615964). Bray-Curtis dissimilarities (doi:10.2307/1942268) between all samples were calculated and used for two-dimensional non-metric multidimensional scaling (NMDS) ordinations with 20 random starts (doi:10.1007/BF02289694). Stress values below 0.2 indicated that the multidimensional dataset was well represented by the 2D ordination. NMDS ordinations were compared and tested using Procrustes correlation analysis (doi:10.1007/BF02291478).
All analyses were carried out with the R statistical environment and the packages vegan (available at: http://cran.r-project.org/package=vegan) and labdsv (available at: http://cran.r-project.org/package=labdsv), as well as with custom R scripts. Operational taxonomic units at 98% sequence identity (OTU0.03) that occurred only once in the whole dataset were termed absolute single sequence OTUs (SSOabs; doi:10.1038/ismej.2011.132). OTU0.03 sequences that occurred only once in at least one sample, but may occur more often in other samples, were termed relative single sequence OTUs (SSOrel). SSOrel are particularly interesting for community ecology, since they comprise rare organisms that might become abundant when conditions change. 16S rRNA amplicons and metagenomic reads have been stored in the sequence read archive under SRA project accession number SRP042162.
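For illustration, the diversity and dissimilarity measures named above can be sketched in Python with toy OTU abundance vectors; this is a minimal sketch, not the original pipeline, which used mothur and the R package vegan:

```python
from collections import Counter

def inverse_simpson(counts):
    """Inverse Simpson index: 1 / sum(p_i^2) over OTU proportions."""
    n = sum(counts)
    return 1.0 / sum((c / n) ** 2 for c in counts)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2*F2), where F1/F2 are the
    numbers of singleton and doubleton OTUs (bias-corrected if F2 = 0)."""
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    s_obs = len(counts)
    return s_obs + (f1 * (f1 - 1) / 2.0 if f2 == 0 else f1 ** 2 / (2.0 * f2))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

sample_a = [10, 5, 1, 1, 2]   # toy OTU abundance vectors, not real data
sample_b = [8, 0, 3, 1, 0]
print(round(bray_curtis(sample_a, sample_b), 3))  # → 0.355
```

In the study these values were computed per sample after rarefying to equal depth; the Bray-Curtis matrix then fed the NMDS ordination.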
Abstract:
Since the Three Mile Island accident, an important focus of pressurized water reactor (PWR) transient analyses has been the small-break loss-of-coolant accident (SBLOCA). In 2002, the discovery of thinning of the vessel head wall at the Davis Besse nuclear power plant reactor indicated the possibility of an SBLOCA in the upper head of the reactor vessel as a result of circumferential cracking of a control rod drive mechanism penetration nozzle - which has cast even greater importance on the study of SBLOCAs. Several experimental tests have been performed at the Large Scale Test Facility to simulate the behavior of a PWR during an upper-head SBLOCA. The last of these tests, Organisation for Economic Co-operation and Development Nuclear Energy Agency Rig of Safety Assessment (OECD/NEA ROSA) Test 6.1, was performed in 2005. This test was simulated with the TRACE 5.0 code, and good agreement with the experimental results was obtained. Additionally, a broad analysis of an upper-head SBLOCA with failed high-pressure safety injection in a Westinghouse PWR was performed, taking into account different accident management actions and conditions in order to check their suitability. This issue has also been analyzed in the framework of the OECD/NEA ROSA project and the Code Applications and Maintenance Program (CAMP). The main conclusion is that the current emergency operating procedures for the Westinghouse reactor design are adequate for these kinds of sequences, and they do not need to be modified.
Abstract:
The optical and radio-frequency spectra of a monolithic master-oscillator power-amplifier emitting at 1.5 μm have been analyzed in a wide range of steady-state injection conditions. The analysis of the spectral maps reveals that, under low injection current of the master oscillator, the device operates in two essentially different operation modes depending on the current injected into the amplifier section. The regular operation mode with predominance of the master oscillator alternates with lasing of the compound cavity modes allowed by the residual reflectance of the amplifier front facet. The quasi-periodic occurrence of these two regimes as a function of the amplifier current has been consistently interpreted in terms of a thermally tuned competition between the modes of the master oscillator and the compound cavity modes.
Abstract:
This Thesis presents a study of the vibro-acoustic behaviour of spacecraft structures that include thin air layers, as well as of their numerical modelling. Air layers can be a fundamental element in these systems, such as folded solar panels, which are the case study considered in this work. One-dimensional models are used to assess the influence of the air layers on the dynamic response of the system. The modelling of these systems is studied for the low- and high-frequency ranges. In the low-frequency range, a set of simulation strategies is proposed, based on numerical techniques commonly used in the aerospace industry, to ease the application of the results of the Thesis to current numerical models. The results show the important role of the air layers in the system response. Modelling these elements with finite element or boundary element methods yields equivalent results, although the applicability of the latter may be conditioned by the geometry of the problem. The use of Statistical Energy Analysis (SEA) for these elements is also studied. One of the proposed simulation strategies, which includes an energy formulation for the air surrounding the structure, is proposed as a preliminary estimator of the system response and its eigenfrequencies. For the high-frequency range, the influence of the definition of the SEA model itself is studied. Reduction techniques are presented to determine a reduced SEA loss matrix for incomplete definitions of the system (when some element that interacts with the rest is not included in the model). This new matrix accounts for the contribution of the substructures that are not considered part of the model and that are usually ignored in the standard procedure for reducing its size.
This matrix also allows the analysis of systems that include a component whose response cannot be measured due to accessibility restrictions. Regarding the determination of the loss factors of the system, a methodology is presented to tackle cases in which the usual method, the Power Injection Method (PIM), cannot be used. A set of methods based on optimization and model-updating techniques is presented for cases in which the response of every element of the system cannot be measured, and also for cases in which not all the elements can be excited, covering a wider set of cases than that addressable with the PIM. For both frequency ranges, different analysis cases are presented: numerical models to validate the proposed methods, and a folded solar panel as an experimental case that demonstrates the practical application of the methods presented in the Thesis.

ABSTRACT

This Thesis presents a study of the vibro-acoustic behaviour of spacecraft structures with thin air layers and their numerical modelling. The air layers can play a key role in these systems, such as solar wings in folded configuration, which constitute the study case for this Thesis. A method based on one-dimensional models is presented to assess the influence of the air layers on the dynamic response of the system. The modelling of such systems is studied for the low- and high-frequency ranges. In the low-frequency range, a set of modelling strategies is proposed, based on numerical techniques used in the industry, to facilitate the application of the results to current numerical models. Results show the active role of the air layers in the system response and their great level of influence. The modelling of these elements by means of Finite Elements (FE) and Boundary Elements (BE) provides equivalent results, although the applicability of BE models can be conditioned by the geometry of the problem.
The use of Statistical Energy Analysis (SEA) for these systems is also presented. Good results for the system response are found for models involving SEA beyond its usual applicability limit. A simulation strategy involving an energy formulation for the surrounding fluid is proposed as a fast preliminary estimate of the system response and the coupled eigenfrequencies. For the high-frequency range, the influence of the definition of the SEA model itself is presented. Reduction techniques are used to determine a reduced SEA loss matrix when the system definition is not complete and some elements that interact with the rest are not included. This new matrix takes into account the contribution of the subsystems not considered, which are neglected in the usual approach for decreasing the size of the model. It also allows the analysis of systems in which accessibility restrictions prevent the response of some element from being measured. Regarding the determination of the loss factors of a system, a methodology is presented for cases in which the usual Power Injection Method (PIM) cannot be applied. A set of methods is presented for cases in which not all the subsystem responses can be measured or not all the subsystems can be excited, such as solar wings in folded configuration. These methods, based on error-minimisation and model-updating techniques, can be used to calculate the system loss factors in a wider set of cases than the PIM. For both frequency ranges, different test problems are analysed: numerical models are studied to validate the proposed methods, and an experimental case consisting of an actual solar wing is studied in both frequency ranges to highlight the industrial application of the new methods presented in the Thesis.
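For reference, the standard Power Injection Method that the proposed techniques generalize can be sketched as a linear inversion. The two-subsystem numbers below are invented, and the matrix/sign convention is one common textbook choice, not necessarily the formulation used in the Thesis:

```python
import numpy as np

# Hypothetical two-subsystem PIM experiment: P_inj[k] is the power injected
# in experiment k (into subsystem k); E[j, k] is the measured vibrational
# energy of subsystem j during experiment k. All values are toy numbers.
omega = 2 * np.pi * 1000.0          # band centre frequency, rad/s (assumed)
P_inj = np.array([1.0, 0.8])        # injected powers, W
E = np.array([[2.0e-4, 0.3e-4],
              [0.5e-4, 1.6e-4]])    # measured energies, J

# SEA power balance over all experiments: diag(P_inj) = omega * C @ E,
# where C is the SEA loss-factor matrix. PIM inverts this relation, which
# is only possible when every subsystem can be excited and measured:
C = np.diag(P_inj) @ np.linalg.inv(E) / omega

# Diagonal entries combine internal and coupling loss factors; off-diagonal
# entries carry the coupling loss factors in this convention.
print(C)
```

When some subsystems cannot be excited or measured, `E` has missing rows or columns and this inversion is impossible, which is exactly the situation the optimization/model-updating methods above address.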
Abstract:
The three-dimensional wall-bounded open cavity may be considered a simplified geometry found in industrial applications such as landing gear or slotted flaps on an airplane. Understanding the complex three-dimensional flow structure around this particular geometry is therefore of major industrial interest. In light of the remarkable earlier investigations of this kind of flow, there is ample evidence that the lateral walls have a great influence on the flow features and hence on its instability modes. Nevertheless, even though there is a large body of literature on cavity flows, most studies are based on the assumption that the flow is two-dimensional and spanwise-periodic. The flow over a realistic open cavity should therefore be considered. This thesis presents an investigation of a three-dimensional wall-bounded open cavity with geometric ratio 6:2:1. To this end, three-dimensional Direct Numerical Simulation (DNS) and global linear instability analysis have been performed. Linear instability analysis reveals that the onset of the first instability in this open cavity occurs around Re_cr ≈ 1080. The three-dimensional shear-layer mode, with a complex structure, is shown to be the most unstable mode. It is noteworthy that the flow pattern of this high-frequency shear-layer mode is similar to the unstable oscillations observed in the supercritical case. DNS of the cavity flow was carried out at different Reynolds numbers, from the steady state until a nonlinear saturated state was obtained. Comparison of the time histories of the kinetic energy shows a clearly dominant energetic mode which shifts between low-frequency and high-frequency oscillations. The complete flow patterns, from the subcritical cases to the supercritical case, have been put in evidence. The flow structure in the supercritical case, Re = 1100, resembles the typical wake-shedding instability oscillations, with the lateral motion that existed in the subcritical cases. This flow pattern is also similar to observations in experiments.
In order to validate the results of the linear instability analysis, the topology of the composite flow fields, reconstructed by linear superposition of a three-dimensional base flow and its leading three-dimensional global eigenmodes, has been studied. The instantaneous wall streamlines of these composite flows display the distinct influence region of each eigenmode. Attention has been focused on the leading high-frequency shear-layer mode; the composite flow fields have been fully characterized with respect to the downstream wake shedding. The three-dimensional shear-layer mode is shown to give rise to a typical wake-shedding instability with lateral motions occurring downstream, which is in good agreement with the experimental results. Moreover, the spanwise-periodic open cavity with the same length-to-depth ratio has also been studied. Its most unstable linear mode differs from that of the real three-dimensional cavity flow because of the existence of the side walls. The structural sensitivity of the unstable global mode is analyzed in the flow-control context. Adjoint-based sensitivity analysis has been employed to localize the receptivity region, where the flow is most sensitive to momentum forcing and mass injection. Because of the non-normality of the linearized Navier-Stokes equations, the direct and adjoint fields show a large spatial separation. The region of strongest sensitivity is located at the upstream lip of the three-dimensional cavity. This numerical finding is in agreement with experimental observations. Finally, a prototype passive flow-control strategy is applied.
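The spatial separation of direct and adjoint modes caused by non-normality can be illustrated on a toy one-dimensional advection-diffusion operator; this is a hypothetical stand-in for the linearized Navier-Stokes operator, with invented parameters, not the thesis computation:

```python
import numpy as np

# Toy 1-D advection-diffusion discretisation: the asymmetric advection term
# makes the operator non-normal, as base-flow advection does for the
# linearised Navier-Stokes equations. Grid size and coefficients are arbitrary.
n = 50
nu, U = 1.0, 8.0
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * nu / dx**2
A += (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) * U / (2 * dx)

# Leading direct eigenmode (of A) and adjoint eigenmode (of A^H):
wd, Vd = np.linalg.eig(A)
wa, Va = np.linalg.eig(A.conj().T)
u = Vd[:, np.argmax(wd.real)]
v = Va[:, np.argmax(wa.real)]

# Structural sensitivity: pointwise product of direct and adjoint mode
# amplitudes, normalised by their inner product (Giannetti-Luchini style).
S = np.abs(u) * np.abs(v) / np.abs(v.conj() @ u)

# The direct and adjoint modes peak at opposite ends of the domain,
# the 1-D analogue of the separation reported for the cavity flow.
print(np.argmax(np.abs(u)), np.argmax(np.abs(v)))
```

The region where `S` is largest marks where a localized feedback (momentum forcing or mass injection) most strongly shifts the unstable eigenvalue, which is how the receptivity region at the cavity lip is identified.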
Abstract:
The use of fixed-point arithmetic is a widespread design choice in systems with tight area, power or performance constraints. To produce implementations where costs are minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths must be carried out. Finding the optimal combination of fixed-point word-lengths for a given system is an NP-hard combinatorial problem to which designers devote between 25 and 50% of the design cycle. Reconfigurable hardware platforms, such as FPGAs, also benefit from the advantages of fixed-point arithmetic, since it compensates for the lower clock frequencies and the less efficient use of hardware of these platforms with respect to ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization techniques. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them: Techniques based on interval extensions have made it possible to obtain very accurate signal and quantization-noise propagation models for systems with non-linear operations. We take this approach one step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a modern technique based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial solutions.
We use this technique to estimate both the dynamic range and the round-off noise of systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the reference values obtained by simulation. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the systems under study grows, which leads to scalability problems. To tackle this problem we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results of all of them. In this way, the number of noise sources is kept under control at all times and, as a result, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, with the goal of keeping the results as accurate as possible. This Ph.D. Thesis also addresses the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. To this end we present two new techniques that explore the reduction of execution time from different angles: First, the interpolative method applies a simple but accurate interpolator to estimate the sensitivity of each signal, which is later used during the optimization stage.
Second, the incremental method revolves around the fact that, although it is strictly necessary to maintain a given confidence interval for the final results of our search, we can employ more relaxed confidence levels, which results in fewer trials per simulation, during the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we show that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book presents HOPLITE, an automated, flexible and modular quantization framework that implements the above techniques and is publicly available. Its goal is to offer developers and researchers a common environment to easily prototype and verify new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how to connect new extensions to the tool using the existing interfaces in order to expand and improve the capabilities of HOPLITE.

ABSTRACT

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time.
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them: The techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution times by approaching the problem in two different ways: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can do it with more relaxed levels, which in turn implies using a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer a common ground for developers and researchers for prototyping and verifying new techniques for system modelling and word-length optimization easily.
We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through an example, the way new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
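The classical greedy word-length search that the interpolative and incremental methods accelerate can be sketched as follows. This is a toy illustration with an invented three-signal datapath and an invented RMS-error budget, not HOPLITE code:

```python
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def output_error(wordlengths, n_samples=2000, seed=0):
    """Monte-Carlo estimate of the RMS output error of a toy datapath
    y = a*b + c when each signal is quantized to its word-length."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        a, b, c = (rng.uniform(-1, 1) for _ in range(3))
        exact = a * b + c
        wa, wb, wc = wordlengths
        approx = quantize(quantize(a, wa) * quantize(b, wb), max(wa, wb)) \
                 + quantize(c, wc)
        acc += (exact - approx) ** 2
    return (acc / n_samples) ** 0.5

def greedy_wordlengths(max_rms=1e-3, start_bits=16):
    """Classical greedy descent: repeatedly shave one bit from the signal
    whose reduction least degrades accuracy, while the error budget holds."""
    wl = [start_bits] * 3
    improved = True
    while improved:
        improved = False
        candidates = []
        for i in range(3):
            if wl[i] > 1:
                trial = wl[:]
                trial[i] -= 1
                err = output_error(trial)
                if err <= max_rms:
                    candidates.append((err, i))
        if candidates:
            _, best = min(candidates)
            wl[best] -= 1
            improved = True
    return wl

print(greedy_wordlengths())
```

Every candidate evaluation here is a full Monte-Carlo run, which is what makes the plain greedy search expensive; the interpolative method replaces many of these runs with an interpolated sensitivity estimate, and the incremental method runs the early evaluations with fewer samples.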
Abstract:
The objective of the current study was to assess how closely batch cultures (BC) of rumen microorganisms can mimic the dietary differences in fermentation characteristics found in the rumen, and to analyse changes in bacterial diversity over the in vitro incubation period. Four ruminally and duodenally cannulated sheep were fed four diets having forage : concentrate ratios (FCR) of 70 : 30 or 30 : 70, with either alfalfa hay or grass hay as forage. Rumen fluid from each sheep was used to inoculate BC containing the same diet fed to the donor sheep, and the main rumen fermentation parameters were determined after 24 h of incubation. There were differences between BC and sheep in the magnitude of most measured parameters, but BC detected differences among diets due to forage type similar to those found in sheep. In contrast, BC did not reproduce the dietary differences due to FCR found in sheep for pH, degradability of neutral detergent fibre and total volatile fatty acid (VFA) concentrations. There were differences between systems in the magnitude of most determined parameters and BC showed higher pH values and NH3–N concentrations, but lower fibre degradability and VFA and lactate concentrations compared with sheep. There were significant relationships between in vivo and in vitro values for molar proportions of acetate, propionate and butyrate, and the acetate : propionate ratio. The automated ribosomal intergenic spacer analysis (ARISA) of 16S ribosomal deoxyribonucleic acid showed that FCR had no effect on bacterial diversity either in the sheep rumen fluid used as inoculum (IN) or in BC samples. In contrast, bacterial diversity was greater with alfalfa hay diets than those with grass hay in the IN, but was unaffected by forage type in the BC. Similarity index between the bacterial communities in the inocula and those in the BC ranged from 67·2 to 74·7%, and was unaffected by diet characteristics. 
Bacterial diversity was lower in BC than in the inocula with 14 peaks out of a total of 181 detected in the ARISA electropherograms never appearing in BC samples, which suggests that incubation conditions in the BC may have caused a selection of some bacterial strains. However, each BC sample showed the highest similarity index with its corresponding rumen IN, which highlights the importance of using rumen fluid from donors fed a diet similar to that being incubated in BC when conducting in vitro experiments.
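The peak-based similarity comparison described above can be illustrated with a Sørensen-Dice index on presence/absence peak profiles. The exact band-matching index used in the study is not specified here, and the fragment lengths below are invented:

```python
def sorensen_similarity(peaks_a, peaks_b):
    """Sørensen-Dice similarity (%) between two sets of ARISA peak positions
    (intergenic-spacer fragment lengths detected in the electropherogram)."""
    a, b = set(peaks_a), set(peaks_b)
    if not a and not b:
        return 100.0
    return 200.0 * len(a & b) / (len(a) + len(b))

inoculum = {410, 436, 502, 523, 611, 700}   # toy fragment lengths, bp
batch    = {410, 436, 523, 611}             # peaks surviving incubation
print(sorensen_similarity(inoculum, batch))  # → 80.0
```

Peaks present in the inoculum but never detected in the batch-culture profile (like the 14 of 181 peaks reported above) lower this index, which is how the selection of strains during in vitro incubation shows up in the comparison.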
Abstract:
This thesis is the result of a project whose objective has been to develop and deploy a dashboard for sentiment analysis of football on Twitter, based on web components and D3.js. To this end, a visualisation server has been developed to present the data obtained from Twitter and analysed with Senpy. This visualisation server has been built with Polymer web components and D3.js. Data mining has been done with a pipeline between Twitter, Senpy and ElasticSearch. Luigi has been used in this process because it helps build complex pipelines of batch jobs, so it has analysed all tweets and stored them in ElasticSearch. Next, D3.js has been used to create interactive widgets that make the data easily accessible; these widgets allow the user to interact with them and filter the data most interesting to him. Polymer web components have been used to build this dashboard according to Google's Material Design and to be able to show dynamic data in the widgets. As a result, this project allows an extensive analysis of the social network, pointing out the influence of players and teams and the emotions and sentiments that emerge over a period of time.
Abstract:
Dendritic cells (DCs) instruct and activate a naive immune system to mount a response toward foreign proteins. Therefore, it has been hypothesized that an ideal vaccine strategy would be to directly introduce genes encoding antigens into DCs. To test this strategy quantitatively, we have compared the immune response elicited by a genetically transfected DC line to that induced by a fibroblast line, or standard genetic immunization. We observe that a single injection of 500–1,000 transfected DCs can produce a response comparable to that of standard genetic immunization, whereas fibroblasts, with up to 50-fold greater transfection efficiency, were less potent. We conclude that transfection of a small number of DCs is sufficient to initiate a wide variety of immune responses. These results indicate that targeting genes to DCs will be important for controlling and augmenting the immunological outcome in genetic immunization.
Abstract:
The enzymes cyclooxygenase-1 and cyclooxygenase-2 (COX-1 and COX-2) catalyze the conversion of arachidonic acid to prostaglandin (PG) H2, the precursor of PGs and thromboxane. These lipid mediators play important roles in inflammation and pain and in normal physiological functions. While there are abundant data indicating that the inducible isoform, COX-2, is important in inflammation and pain, the constitutively expressed isoform, COX-1, has also been suggested to play a role in inflammatory processes. To address the latter question pharmacologically, we used a highly selective COX-1 inhibitor, SC-560 (COX-1 IC50 = 0.009 μM; COX-2 IC50 = 6.3 μM). SC-560 inhibited COX-1-derived platelet thromboxane B2, gastric PGE2, and dermal PGE2 production, indicating that it was orally active, but did not inhibit COX-2-derived PGs in the lipopolysaccharide-induced rat air pouch. Therapeutic or prophylactic administration of SC-560 in the rat carrageenan footpad model did not affect acute inflammation or hyperalgesia at doses that markedly inhibited in vivo COX-1 activity. By contrast, celecoxib, a selective COX-2 inhibitor, was anti-inflammatory and analgesic in this model. Paradoxically, both SC-560 and celecoxib reduced paw PGs to equivalent levels. Increased levels of PGs were found in the cerebrospinal fluid after carrageenan injection and were markedly reduced by celecoxib, but were not affected by SC-560. These results suggest that, in addition to the role of peripherally produced PGs, there is a critical, centrally mediated neurological component to inflammatory pain that is mediated at least in part by COX-2.
Abstract:
Pain is a unified experience composed of interacting discriminative, affective-motivational, and cognitive components, each of which is mediated and modulated through forebrain mechanisms acting at spinal, brainstem, and cerebral levels. The size of the human forebrain in relation to the spinal cord gives anatomical emphasis to forebrain control over nociceptive processing. Human forebrain pathology can cause pain without the activation of nociceptors. Functional imaging of the normal human brain with positron emission tomography (PET) shows synaptically induced increases in regional cerebral blood flow (rCBF) in several regions specifically during pain. We have examined the variables of gender, type of noxious stimulus, and the origin of nociceptive input as potential determinants of the pattern and intensity of rCBF responses. The structures most consistently activated across genders and during contact heat pain, cold pain, cutaneous laser pain or intramuscular pain were the contralateral insula and anterior cingulate cortex, the bilateral thalamus and premotor cortex, and the cerebellar vermis. These regions are commonly activated in PET studies of pain conducted by other investigators, and the intensity of the brain rCBF response correlates parametrically with perceived pain intensity. To complement the human studies, we developed an animal model for investigating stimulus-induced rCBF responses in the rat. In accord with behavioral measures and the results of human PET, there is a progressive and selective activation of somatosensory and limbic system structures in the brain and brainstem following the subcutaneous injection of formalin. The animal model and human PET studies should be mutually reinforcing and thus facilitate progress in understanding forebrain mechanisms of normal and pathological pain.
Abstract:
Squid synaptotagmin (Syt) cDNA, including its open reading frame, was cloned and polyclonal antibodies were obtained in rabbits immunized with glutathione S-transferase (GST)-Syt-C2A. Binding assays indicated that the antibody, anti-Syt-C2A, recognized squid Syt and inhibited the Ca2+-dependent phospholipid binding to the C2A domain. This antibody, when injected into the preterminal at the squid giant synapse, blocked transmitter release in a manner similar to that previously reported for the presynaptic injection of members of the inositol high-polyphosphate series. The block was not accompanied by any change in the presynaptic action potential or in the amplitude or voltage dependence of the presynaptic Ca2+ current. The postsynaptic potential was rather insensitive to repetitive presynaptic stimulation, indicating a direct effect of the antibody on the transmitter release system. Following block of transmitter release, confocal microscopic analysis of the preterminal junction injected with rhodamine-conjugated anti-Syt-C2A demonstrated fluorescent spots at the inner surface of the presynaptic plasmalemma next to the active zones. Structural analysis of the same preparations demonstrated an accumulation of synaptic vesicles corresponding in size and distribution to the fluorescent spots demonstrated confocally. Together with the finding that this antibody prevents Ca2+ binding to a specific receptor in the C2A domain, these results indicate that Ca2+ triggers transmitter release by activating the C2A domain of Syt. We conclude that the C2A domain is directly related to the fusion of synaptic vesicles that results in transmitter release.
Abstract:
In this work, batch and dynamic adsorption tests are coupled for an accurate evaluation of CO2 adsorption performance for three different activated carbons obtained from olive stones by chemical activation followed by physical activation with CO2 for varying times, i.e. 20, 40 and 60 h. Kinetic and thermodynamic CO2 adsorption tests from simulated flue gas at different temperatures and CO2 pressures are carried out under both batch (a manometric apparatus operating with pure CO2) and dynamic (a lab-scale fixed-bed column operating with a CO2/N2 mixture) conditions. The textural characterization of the activated carbon samples shows a direct dependence of both micropore and ultramicropore volume on the activation time; hence AC60 has the highest contribution. The adsorption tests conducted at 273 and 293 K show that, when the CO2 pressure is lower than 0.3 bar, the lower the activation time, the higher the CO2 adsorption capacity, and a ranking ωeq(AC20) > ωeq(AC40) > ωeq(AC60) can be clearly established at T = 293 K. This result can likely be ascribed to the narrower pore size distribution of the AC20 sample, whose smaller pores are more effective for CO2 capture at higher temperature and lower CO2 pressure, the latter representing the operating conditions of major interest for decarbonization of a flue-gas effluent. Moreover, the experimental results obtained from the dynamic tests confirm the results derived from the batch tests in terms of CO2 adsorption capacity. It is important to highlight that the adsorption of N2 on the synthesized AC samples can be considered negligible. Finally, the importance of a proper analysis of characterization data and adsorption experimental results is highlighted for a correct assessment of the CO2 removal performance of activated carbons at different CO2 pressures and operating temperatures.
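The dynamic adsorption capacity mentioned above is conventionally obtained by integrating the fixed-bed breakthrough curve (capacity = molar feed rate per gram of sorbent times the area above the normalized outlet curve). A minimal sketch, with entirely hypothetical breakthrough data, flow rate, and bed mass (none of these numbers are from the paper):

```python
import numpy as np

# Hypothetical breakthrough curve: normalized outlet CO2 (C/C0) vs. time.
t_min = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # min
c_over_c0 = np.array([0.0, 0.0, 0.05, 0.3, 0.7, 0.95, 1.0])

Q = 0.5                   # gas flow rate, L/min (assumed)
c0 = 0.15 / 22.4          # inlet CO2, mol/L: 15% CO2 at ~STP (approx.)
m_ads = 20.0              # adsorbent mass, g (assumed)

# Area above the breakthrough curve via the trapezoidal rule.
y = 1.0 - c_over_c0
area_min = np.sum(np.diff(t_min) * (y[:-1] + y[1:]) / 2.0)

# Capacity = (flow * inlet concentration / mass) * stoichiometric time area.
capacity_mol_per_g = Q * c0 / m_ads * area_min
print(f"dynamic CO2 capacity ~ {capacity_mol_per_g * 1000:.2f} mmol/g")
```

With these placeholder numbers the sketch yields roughly 2.9 mmol/g, in the general range reported for activated carbons; the batch (manometric) capacity is computed instead from the pressure drop in a known volume.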
Abstract:
We have employed an inverse engineering strategy based on quantitative proteome analysis to identify changes in intracellular protein abundance that correlate with increased specific recombinant monoclonal antibody production (qMab) by engineered murine myeloma (NS0) cells. Four homogeneous NS0 cell lines differing in qMab were isolated from a pool of primary transfectants. The proteome of each stably transfected cell line was analyzed at mid-exponential growth phase by two-dimensional gel electrophoresis (2D-PAGE), and individual protein spot volume data derived from digitized gel images were compared statistically. To identify changes in protein abundance associated with qMab, datasets were screened for proteins that exhibited either a linear correlation with cell line qMab or a conserved change in abundance specific only to the cell line with highest qMab. Several proteins with altered abundance were identified by mass spectrometry. Proteins exhibiting a significant increase in abundance with increasing qMab included molecular chaperones known to interact directly with nascent immunoglobulins during their folding and assembly (e.g., BiP, endoplasmin, protein disulfide isomerase). 2D-PAGE analysis showed that in all cell lines Mab light chain was more abundant than heavy chain, indicating that this is a likely prerequisite for efficient Mab production. In summary, these data reveal both the adaptive responses and molecular mechanisms enabling mammalian cells in culture to achieve high-level recombinant monoclonal antibody production. (C) 2004 Wiley Periodicals, Inc.
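The screening step described above, flagging proteins whose spot volume tracks cell line qMab linearly, can be sketched as follows. The numbers are invented placeholders and Pearson's r stands in for whatever statistic the authors actually applied:

```python
import numpy as np

# Four hypothetical cell lines ordered by specific productivity (qMab).
qmab = np.array([5.0, 12.0, 20.0, 35.0])  # e.g. pg/cell/day, invented

# Normalized 2D-gel spot volumes per cell line (one row per protein).
spot_volumes = {
    "BiP":         np.array([1.0, 1.8, 2.9, 4.6]),  # rises with qMab
    "endoplasmin": np.array([0.8, 1.1, 1.9, 3.0]),  # rises with qMab
    "unrelated":   np.array([2.1, 1.9, 2.2, 2.0]),  # no trend
}

# Flag proteins whose abundance correlates linearly with qMab.
for protein, vols in spot_volumes.items():
    r = np.corrcoef(qmab, vols)[0, 1]
    if abs(r) > 0.95:  # arbitrary screening threshold
        print(f"{protein}: r = {r:.2f} -> candidate for MS identification")
```

In the actual study the flagged spots were then excised and identified by mass spectrometry; the threshold and correlation measure here are illustrative only.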