957 results for Tanks-in-series Model
Abstract:
Biofilms in milk cooling tanks compromise product quality even at the farm level. Given the lack of studies on this topic, this study evaluated the microbiological conditions of raw milk cooling tanks on farms and characterized the microorganisms isolated from these tanks. Samples were collected with sterile swabs from seven milk cooling tanks, at three different points in each tank. Mesophilic and psychrotrophic counts were performed on all samples. Pseudomonas spp., Bacillus cereus and atypical colonies formed on selective media were also isolated, totaling 297 isolates. All isolates were tested for protease and lipase production and biofilm formation. Of the total isolates, 62.9% produced protease, 55.9% produced lipase, and 50.2% formed biofilm. The most widespread genus inside the milk cooling tanks was Pseudomonas, since it was not possible to associate this contamination with a single sampling point in the equipment. High microbial counts were found in some cooling tanks, indicating poor cleaning of the equipment and providing strong evidence of microbial biofilm presence. Moreover, the potential contamination of the milk with both microbial cells and their degradative enzymes compromises milk quality.
Abstract:
Within the framework of state security policy, the focus of this dissertation is the relationship between how new security threats are perceived and the policy planning and bureaucratic implementation designed to address them. In addition, the thesis explores some of the inertias that may exist in the core of the state apparatus as it addresses new threats, and how these could be better managed. The dissertation is built on five thematic and interrelated articles highlighting different aspects of the process from the moment significant new national security threats are detected by governments until, on the policy planning side, those threats are translated into protective measures within society. The timeline differs widely between countries, and some key aspects of this process are also studied. One focus concerns mechanisms for adaptability within the Intelligence Community, a second concerns the policy planning process within Cabinet Offices/National Security Councils, and a third concerns how policy is planned and implemented within the bureaucracy. The issue of policy transfer is also analysed, revealing some imitation of innovation within governmental structures and policies, for example in the field of cyber defence. The main finding of the dissertation is that this context has built-in inertias and bureaucratic seams found in most government bureaucratic machineries. Because much of the information and many of the planning measures are security classified, transparency and internal debate on these issues are constrained and alternative assessments become limited. To remedy this situation, the thesis recommends ways to improve the decision-making system in order to streamline the processes involved. Another special focus of the thesis is the role of public policy think tanks in the United States as an instrument of change in the country’s national security decision-making environment, viewed as a possible source of new ideas and innovation. The findings in this part are based on unique interview data on how think tanks become successful and influence the policy debate in a country such as the United States. It appears clearly that in countries such as the United States think tanks smooth the decision-making processes, and that this model, with some adaptations, might also be transferable to other democratic countries.
Abstract:
Intravenous immunoglobulins (IVIg) are a polyclonal IgG preparation isolated and pooled from the blood plasma of multiple donors. Initially used as replacement therapy in patients with primary or secondary immunodeficiency, IVIg are now widely used, at a high so-called immunomodulatory dose, to treat a number of autoimmune, allergic and inflammatory conditions. Various mechanisms of action have been postulated over the years to explain the therapeutic effect of IVIg in autoimmune and inflammatory diseases. Among others, a growing body of data from experimental models in animals and humans suggests that IVIg induce the expansion and enhance the suppressive activity of regulatory T cells (Tregs), through a mechanism that remains unknown. Moreover, patients with autoimmune or inflammatory diseases often have lower Treg numbers than healthy individuals. A better understanding of the mechanisms by which IVIg modulate regulatory T cells is therefore required to allow a more rational use of this blood product as a therapeutic alternative in the treatment of autoimmune and inflammatory diseases. Using an experimental model of allergen-induced airway inflammation, we showed that IVIg significantly reduced airway inflammation, in association with differentiation of Tregs from non-regulatory T cells of the lung tissue. We also showed that, in our experimental model, the anti-inflammatory effect of IVIg depended on pulmonary CD11c+ dendritic cells (DCs), since this effect could be fully reproduced by adoptive transfer of DCs from mice previously treated with IVIg. In this respect, it is already established that IVIg can modulate the activation and properties of DCs to promote immune tolerance, and that these cells appear to be crucial for the peripheral induction of Tregs. We therefore sought to better understand how IVIg exert their effect on these cells. For the first time, we showed that the sialic acid-rich IgG fraction (SA-IVIg) (2-5% of total donor IgG) interacts with an inhibitory C-type lectin dendritic cell receptor (DCIR) and activates an intracellular signalling cascade initiated by phosphorylation of the ITIM motif, which is responsible for the observed tolerogenic changes in dendritic cells and Tregs. The anti-inflammatory activity of the SA-IVIg fraction had been described in earlier studies, but here again the mechanism by which this treatment modifies DC function had not been established. We finally showed that the DCIR receptor facilitates internalization of receptor-bound IgG molecules and that this step is crucial for the peripheral induction of Tregs. As a blood product, IVIg are a valuable treatment available in limited supply. Characterizing the mechanisms of action of IVIg will allow a better use of this treatment in a broad range of autoimmune and inflammatory diseases.
Abstract:
The thesis deals with some non-linear Gaussian and non-Gaussian time series models and concentrates mainly on the properties and application of a first-order autoregressive process with a Cauchy marginal distribution. Time series of prices, consumption, money in circulation, bank deposits and bank clearings, sales and profit in a department store, national income and foreign exchange reserves, and prices and dividends of shares on a stock exchange are examples of economic and business time series. The thesis also discusses the application of a threshold autoregressive (TAR) model and attempts to fit this model to time series data. Another important non-linear model considered is the ARCH model, and a third is the TARCH model. The main objective is to identify an appropriate model for a given data set. The data considered are daily coconut oil prices over a period of three years; since these are price data, consecutive observations may not be independent, and a time series model is therefore appropriate. The study also examines properties such as ergodicity, mixing and time reversibility, as well as various estimation procedures for the unknown parameters of the process.
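As an illustration of the central object studied here, the Python sketch below simulates one standard construction of a first-order autoregressive process whose stationary marginal is standard Cauchy: because the Cauchy family is stable under scaling and addition, X_t = ρX_{t−1} + ε_t with ε_t ~ Cauchy(0, 1 − |ρ|) has a Cauchy(0, 1) marginal. This is a generic textbook construction, not code from the thesis, and the parameter values and the median/IQR check are illustrative only.

```python
import numpy as np

def simulate_cauchy_ar1(rho, n, x0=0.0, seed=0):
    """Simulate X_t = rho * X_{t-1} + e_t with e_t ~ Cauchy(0, 1 - |rho|).

    Because Cauchy scale parameters add under independent summation and
    scale linearly under multiplication, the stationary marginal of X_t is
    standard Cauchy(0, 1) for |rho| < 1.
    """
    rng = np.random.default_rng(seed)
    scale = 1.0 - abs(rho)
    x = np.empty(n)
    prev = x0
    for t in range(n):
        prev = rho * prev + scale * rng.standard_cauchy()
        x[t] = prev
    return x

if __name__ == "__main__":
    x = simulate_cauchy_ar1(rho=0.6, n=100_000)
    # A standard Cauchy has median 0 and interquartile range 2.
    print("median:", np.median(x), "IQR:", np.percentile(x, 75) - np.percentile(x, 25))
```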
Abstract:
It is known that the density of a dissolved substance can decisively determine the direction and strength of its movement in the subsurface. Numerous studies have shown that the permeability distribution of a porous medium can amplify or attenuate these density effects. How this coupled effect influences the mixing of two fluids was investigated in this work, coupling an experimental model with both a numerical and an analytical model. The perturbation-theory-based stochastic theory of macrodispersion was further developed here for the case of transverse macrodispersion. For the case of a stable stratification, a series of carefully controlled two-dimensional experiments on a stochastically heterogeneous model aquifer was carried out in a model tank (10 m x 1.2 m x 0.1 m) at the University of Kassel. Test series were run with varying concentration differences (250 ppm to 100,000 ppm) and flow velocities (u = 1 m/d to 8 m/d) in three differently anisotropically packed porous media with varying variances and correlations of the lognormally distributed permeabilities. The steady-state spatial concentration distribution of the spreading saltwater plume was measured via electrical conductivity, and the dispersion was calculated from the height difference between the 84% and 16% relative concentration levels. In parallel, a numerical model was set up with the density-dependent finite-element flow and transport code SUTRA. With the calibrated numerical model, predictions for possible transport scenarios, sensitivity analyses and stochastic Monte-Carlo simulations were carried out. The flow velocity was set, in both the experimental and the numerical model, via constant-pressure boundaries at the inflow and outflow tanks; the spatial concentration distribution proved highly sensitive to local pressure variations. The investigations showed that, with increasing distance from the inflow boundary, the concentration plume approaches an effective value in a wave-like manner, from which the macrodispersivity can be determined. Clear non-ergodic effects were observed, i.e. strong deviations of the second spatial moments of the concentration distribution in the deterministic experiments from the expected values of the stochastic theory. The transverse macrodispersivity increased in proportion to the variance and correlation of the lognormal permeability distribution and in inverse proportion to the flow velocity and the density difference between the two fluids. Starting from the density-dependent macrodispersion tensor derived by Welty et al. [2003] using perturbation theory, the stochastic formula for transverse macrodispersion was further developed in this work and verified both experimentally and numerically.
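As a rough illustration of the 84%/16% evaluation described above, the Python sketch below estimates a transverse macrodispersivity from a vertical profile of relative concentration, assuming a Gaussian mixing zone in which the distance between the 84% and 16% levels equals 2σ and σ² = 2·α_T·x at travel distance x. The function, the variable names and the synthetic erfc-shaped test profile are illustrative assumptions, not material from the study itself.

```python
import numpy as np
from scipy.special import erfc

def transverse_dispersivity(z, c_rel, x):
    """Estimate transverse (vertical) macrodispersivity alpha_T from a
    steady-state vertical profile of relative concentration c_rel(z)
    measured at travel distance x from the inflow boundary.

    Assumes an approximately Gaussian mixing zone, so that the distance
    between the 84% and 16% levels is 2*sigma and sigma**2 = 2*alpha_T*x.
    """
    order = np.argsort(c_rel)                     # np.interp needs an increasing abscissa
    z16 = np.interp(0.16, c_rel[order], z[order])
    z84 = np.interp(0.84, c_rel[order], z[order])
    sigma = 0.5 * abs(z84 - z16)
    return sigma**2 / (2.0 * x)

# Synthetic check: erfc-shaped profile at x = 5 m generated with alpha_T = 1e-3 m
x = 5.0
sigma_true = np.sqrt(2 * 1e-3 * x)
z = np.linspace(-0.2, 0.2, 201)
c_rel = 0.5 * erfc(z / (np.sqrt(2) * sigma_true))
print(transverse_dispersivity(z, c_rel, x))       # ~1e-3
```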
Abstract:
Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine adrenal chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike series of release events. We normalized the spike series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments deviate considerably from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles’ arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided towards the membrane by an attractive harmonic potential, the simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. We also show that including the fusion-time statistics in our model does not produce any significant changes in the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and is reflected in the amperometric signals. Our results confirm the conclusions of previous imaging studies performed on single vesicles that vesicle motion underneath the plasma membrane is not purely random, but biased towards the membrane.
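A minimal sketch of the kind of interspike-interval comparison described above: fit the intervals between release events with an exponential distribution (the Poisson-process expectation) and with a two-parameter gamma distribution, and compare log-likelihoods. The synthetic spike train and all parameter values are invented for illustration; this is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def compare_isi_models(spike_times):
    """Fit interspike intervals with an exponential (Poisson process) and a
    two-parameter gamma distribution and compare their log-likelihoods."""
    isi = np.diff(np.sort(spike_times))
    loc_e, scale_e = stats.expon.fit(isi, floc=0.0)           # rate-only fit
    ll_exp = stats.expon.logpdf(isi, loc_e, scale_e).sum()
    shape, loc_g, scale_g = stats.gamma.fit(isi, floc=0.0)    # shape + scale fit
    ll_gam = stats.gamma.logpdf(isi, shape, loc_g, scale_g).sum()
    return {"gamma_shape": shape, "loglik_gamma": ll_gam, "loglik_expon": ll_exp}

# Synthetic example: gamma-distributed intervals (shape > 1 mimics directed arrival)
rng = np.random.default_rng(1)
spikes = np.cumsum(rng.gamma(shape=2.5, scale=0.4, size=500))
print(compare_isi_models(spikes))
```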
Abstract:
A key capability of data-race detectors is to determine whether one thread executes logically in parallel with another or whether the threads must operate in series. This paper provides two algorithms, one serial and one parallel, to maintain series-parallel (SP) relationships "on the fly" for fork-join multithreaded programs. The serial SP-order algorithm runs in O(1) amortized time per operation. In contrast, the previously best algorithm requires a time per operation that is proportional to Tarjan’s functional inverse of Ackermann’s function. SP-order employs an order-maintenance data structure that allows us to implement a more efficient "English-Hebrew" labeling scheme than was used in earlier race detectors, which immediately yields an improved determinacy-race detector. In particular, any fork-join program running in T₁ time on a single processor can be checked on the fly for determinacy races in O(T₁) time. Corresponding improved bounds can also be obtained for more sophisticated data-race detectors, for example, those that use locks. By combining SP-order with Feng and Leiserson’s serial SP-bags algorithm, we obtain a parallel SP-maintenance algorithm, called SP-hybrid. Suppose that a fork-join program has n threads, T₁ work, and a critical-path length of T∞. When executed on P processors, we prove that SP-hybrid runs in O((T₁/P + PT∞) lg n) expected time. To understand this bound, consider that the original program obtains linear speed-up over a 1-processor execution when P = O(T₁/T∞). In contrast, SP-hybrid obtains linear speed-up when P = O(√(T₁/T∞)), but the work is increased by a factor of O(lg n).
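For intuition about the English-Hebrew labelling idea mentioned above, here is a small offline Python sketch (not the on-the-fly, O(1)-amortized SP-order algorithm of the paper): the leaves of a series-parallel parse tree are labelled with an English order, which visits every node's children left to right, and a Hebrew order, which reverses the children of parallel nodes; two threads are then in series exactly when the two orders agree, and logically parallel when they disagree.

```python
from itertools import count

class SPNode:
    """Node of an SP parse tree: kind is 'S' (series), 'P' (parallel) or 'L' (leaf thread)."""
    def __init__(self, kind, children=(), name=None):
        self.kind, self.children, self.name = kind, list(children), name
        self.english = self.hebrew = None

def label(root):
    """Assign English and Hebrew order labels to the leaves of an SP parse tree."""
    eng, heb = count(), count()
    def walk_english(node):
        if node.kind == 'L':
            node.english = next(eng)
        else:
            for child in node.children:          # always left to right
                walk_english(child)
    def walk_hebrew(node):
        if node.kind == 'L':
            node.hebrew = next(heb)
        else:
            kids = node.children if node.kind == 'S' else reversed(node.children)
            for child in kids:                   # reversed only under P nodes
                walk_hebrew(child)
    walk_english(root)
    walk_hebrew(root)

def precedes(u, v):
    """u is logically in series before v iff both orders agree."""
    return u.english < v.english and u.hebrew < v.hebrew

def parallel(u, v):
    return not precedes(u, v) and not precedes(v, u)

# Example program a; (b || c); d  --  b and c execute logically in parallel
a, b, c, d = (SPNode('L', name=n) for n in "abcd")
label(SPNode('S', [a, SPNode('P', [b, c]), d]))
print(parallel(b, c), precedes(a, d), parallel(a, d))  # True True False
```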
Abstract:
The representation of the diurnal cycle in the Hadley Centre climate model is evaluated using simulations of the infrared radiances observed by Meteosat 7. In both the window and water vapour channels, the standard version of the model with 19 levels produces a good simulation of the geographical distributions of the mean radiances and of the amplitude of the diurnal cycle. Increasing the vertical resolution to 30 levels leads to further improvements in the mean fields. The timing of the maximum and minimum radiances reveals significant model errors, however, which are sensitive to the frequency with which the radiation scheme is called. In most regions, these errors are consistent with well documented errors in the timing of convective precipitation, which peaks before noon in the model, in contrast to the observed peak in the late afternoon or evening. When the radiation scheme is called every model time step (half an hour), as opposed to every three hours in the standard version, the timing of the minimum radiance is improved for convective regions over central Africa, due to the creation of upper-level layer-cloud by detrainment from the convection scheme, which persists well after the convection itself has dissipated. However, this produces a decoupling between the timing of the diurnal cycles of precipitation and window channel radiance. The possibility is raised that a similar decoupling may occur in reality and the implications of this for the retrieval of the diurnal cycle of precipitation from infrared radiances are discussed.
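One common way to quantify the amplitude and timing of a diurnal cycle such as the one evaluated above is to fit the first 24-hour harmonic to a local-time composite by least squares, as in the Python sketch below. This is a generic diagnostic, not the evaluation code used in the study, and the synthetic example values are invented.

```python
import numpy as np

def diurnal_harmonic(hours, values):
    """Fit mean + first 24-h harmonic to a diurnal composite and return the
    harmonic amplitude and the local time (hours) of its maximum.

    hours: local time of each sample in [0, 24); values: the composited field
    (e.g. a window-channel brightness temperature) at those times.
    """
    omega = 2.0 * np.pi / 24.0
    design = np.column_stack([np.ones_like(hours), np.cos(omega * hours), np.sin(omega * hours)])
    _, a, b = np.linalg.lstsq(design, values, rcond=None)[0]
    amplitude = np.hypot(a, b)
    t_max = (np.arctan2(b, a) / omega) % 24.0
    return amplitude, t_max

# Synthetic check: a cycle peaking at 15:00 local time with amplitude 12
hours = np.arange(0, 24, 3.0)
vals = 280 + 12 * np.cos(2 * np.pi * (hours - 15) / 24)
print(diurnal_harmonic(hours, vals))  # ~(12.0, 15.0)
```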
Abstract:
A high resolution regional atmosphere model is used to investigate the sensitivity of the North Atlantic storm track to the spatial and temporal resolution of the sea surface temperature (SST) data used as a lower boundary condition. The model is run over an unusually large domain covering all of the North Atlantic and Europe, and is shown to produce a very good simulation of the observed storm track structure. The model is forced at the lateral boundaries with 15–20 years of data from the ERA-40 reanalysis, and at the lower boundary by SST data of differing resolution. The impacts of increasing spatial and temporal resolution are assessed separately, and in both cases increasing the resolution leads to subtle, but significant changes in the storm track. In some, but not all cases these changes act to reduce the small storm track biases seen in the model when it is forced with low-resolution SSTs. In addition there are several clear mesoscale responses to increased spatial SST resolution, with surface heat fluxes and convective precipitation increasing by 10–20% along the Gulf Stream SST gradient.
Abstract:
The impacts of afforestation at Plynlimon in the Severn catchment, mid-Wales, and in the Bedford Ouse catchment in south-east England are evaluated using the INCA model to simulate nitrogen (N) fluxes and concentrations. The INCA model represents the key hydrological and N processes operating in catchments and simulates the daily dynamic behaviour as well as the annual fluxes. INCA has been applied to five years of data from the Hafren and Hore headwater sub-catchments (6.8 km² in total) of the River Severn at Plynlimon, and the model was calibrated and validated against field data. Afforestation is simulated by altering the uptake rate parameters in the model. INCA simulates the daily N behaviour in the catchments with good accuracy and reconstructs the annual budgets for N release following clearfelling: a four-fold increase in N fluxes was followed by a slow recovery after re-afforestation. For comparison, INCA has been applied to the large (8380 km²) Bedford Ouse catchment to investigate the impact of replacing 20% of the arable land with forestry. The reduction in fertiliser inputs from arable farming and the N uptake by the forest are predicted to reduce the N flux reaching the main river system, leading to a 33% reduction in nitrate-N concentrations in the river water.
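INCA itself is a process-based, daily dynamic model; purely as a back-of-envelope illustration of why converting arable land to forest reduces the catchment N flux, the Python sketch below uses a simple area-weighted export-coefficient calculation. The land-use split and the export coefficients are hypothetical numbers chosen for illustration, not values from the study.

```python
def catchment_n_flux(areas_km2, export_kg_per_km2_yr):
    """Annual catchment N flux (kg/yr) as an area-weighted sum of per-land-use
    export coefficients: crude bookkeeping, not the process-based INCA model."""
    return sum(areas_km2[lu] * export_kg_per_km2_yr[lu] for lu in areas_km2)

# Hypothetical 8380 km2 catchment, 40% arable; convert half of that (20% of the total) to forest.
export = {"arable": 2500.0, "grass": 800.0, "urban": 1200.0, "forest": 300.0}   # kg N / km2 / yr
before = {"arable": 0.40 * 8380, "grass": 0.40 * 8380, "urban": 0.10 * 8380, "forest": 0.10 * 8380}
after = dict(before, arable=0.20 * 8380, forest=0.30 * 8380)

f0, f1 = catchment_n_flux(before, export), catchment_n_flux(after, export)
print(f"flux before: {f0:.3g} kg/yr, after: {f1:.3g} kg/yr, reduction: {100 * (f0 - f1) / f0:.0f}%")
```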
Abstract:
Current changes in tropical precipitation from satellite data and climate models are assessed. Wet and dry regions of the tropics are defined as the highest 30% and lowest 70% of monthly precipitation values, respectively. Observed tropical ocean trends in the wet regime (1.8%/decade) and the dry regime (−2.6%/decade) according to the Global Precipitation Climatology Project (GPCP) over the period covered by Special Sensor Microwave Imager (SSM/I) data (1988–2008), where GPCP is believed to be more reliable, are of smaller magnitude than those over the entire time series (1979–2008) and closer to model simulations than in previous comparisons. Analysing changes in extreme precipitation using daily data within the wet regions, an increase in the frequency of the heaviest 6% of events with warming is identified for both the SSM/I observations and the model ensemble mean. The SSM/I data indicate an increase in the frequency of the heaviest events with warming that is several times larger than the expected Clausius–Clapeyron scaling and at the upper limit of the substantial range of responses in the model simulations.
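The Python sketch below illustrates, on synthetic data, the kind of regime-based trend calculation described above: each month is split at its 70th percentile of grid-point precipitation into a wet regime (highest 30% of values) and a dry regime (lowest 70%), and the linear trend of each regime-mean series is expressed in % per decade. Array shapes, the per-month thresholding and the synthetic data are assumptions for illustration, not the GPCP/SSM/I processing chain.

```python
import numpy as np

def regime_trends(precip, years):
    """precip: (n_months, n_points) monthly precipitation; years: decimal year per month.

    Splits each month at its 70th percentile into wet (top 30%) and dry (bottom 70%)
    regimes and returns the linear trend of each regime-mean series in % per decade.
    """
    thresh = np.percentile(precip, 70, axis=1, keepdims=True)
    out = {}
    for name, field in (("wet", np.where(precip >= thresh, precip, np.nan)),
                        ("dry", np.where(precip < thresh, precip, np.nan))):
        series = np.nanmean(field, axis=1)
        slope = np.polyfit(years, series, 1)[0]             # units per year
        out[name] = 100.0 * 10.0 * slope / series.mean()    # percent per decade
    return out

# Synthetic example: 30 years of monthly data over 500 points with a weak imposed trend
rng = np.random.default_rng(0)
years = 1979 + np.arange(360) / 12.0
precip = rng.gamma(2.0, 2.0, size=(360, 500)) * (1 + 0.002 * (years - 1979))[:, None]
print(regime_trends(precip, years))
```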
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, in which the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux due to Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought such that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited, within the model, to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates, even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
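A schematic Python sketch of the first (thermohaline-threshold) method: integrate the cross-section transport over only those grid cells warmer and more saline than candidate thresholds, choose the thresholds that maximize the correlation with a reference (e.g. float-derived) leakage series, and then regress the reference on the thresholded transport. Variable names, grid layout and the synthetic demonstration are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

def thresholded_transport(v, temp, salt, area, t_thresh, s_thresh):
    """Eulerian transport (Sv) across a section, counting only cells warmer than
    t_thresh and more saline than s_thresh. v in m/s, area in m^2, shape (time, cells)."""
    mask = (temp > t_thresh) & (salt > s_thresh)
    return np.sum(np.where(mask, v * area, 0.0), axis=1) / 1e6

def calibrate_thresholds(v, temp, salt, area, leakage_ref, t_grid, s_grid):
    """Grid-search the (T, S) thresholds maximizing correlation with the reference
    leakage series, then fit a linear regression from transport to leakage."""
    best = (-np.inf, (np.nan, np.nan), None)
    for t0 in t_grid:
        for s0 in s_grid:
            series = thresholded_transport(v, temp, salt, area, t0, s0)
            if series.std() == 0:
                continue
            r = np.corrcoef(series, leakage_ref)[0, 1]
            if r > best[0]:
                best = (r, (t0, s0), series)
    r, (t0, s0), series = best
    slope, intercept = np.polyfit(series, leakage_ref, 1)
    return {"t_thresh": t0, "s_thresh": s0, "corr": r, "slope": slope, "intercept": intercept}

# Synthetic demonstration: 100 time steps, 50 cells along the section
rng = np.random.default_rng(0)
temp = rng.uniform(5, 25, (100, 50))
salt = rng.uniform(34, 36, (100, 50))
v = rng.normal(0.1, 0.05, (100, 50))
area = np.full(50, 1e8)
ref = 2 * thresholded_transport(v, temp, salt, area, 15.0, 35.0) + rng.normal(0, 1, 100)
print(calibrate_thresholds(v, temp, salt, area, ref, np.arange(10, 20), np.arange(34.5, 35.6, 0.25)))
```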
Abstract:
The multidecadal variability of the El Niño–Southern Oscillation (ENSO)–South Asian monsoon relationship is elucidated in a 1000-year control simulation of a coupled general circulation model. The results indicate that the Atlantic Multidecadal Oscillation (AMO), resulting from the natural fluctuation of the Atlantic Meridional Overturning Circulation (AMOC), plays an important role in modulating the multidecadal variation of the ENSO-monsoon relationship. The sea surface temperature anomalies associated with the AMO induce not only a significant climate impact in the Atlantic but also coupled feedbacks in the tropical Pacific. The remote response of the Pacific Ocean to a positive phase of the AMO, which results from an enhanced AMOC in the model simulation, is characterized by statistically significant warming in the North Pacific and in the western tropical Pacific, a relaxation of the tropical easterly trades in the central and eastern tropical Pacific, and a deeper thermocline in the eastern tropical Pacific. These changes in mean state lead to a reduction of ENSO variability and therefore a weakening of the ENSO-monsoon relationship. This study suggests a non-local mechanism for the low-frequency fluctuation of the ENSO-monsoon relationship, although the AMO explains only a fraction of the ENSO–South Asian monsoon variation on decadal to multidecadal timescales. Given that the multidecadal variations of the AMOC, and therefore of the AMO, exhibit decadal predictability, this study highlights the possibility that part of the change in climate variability in the Pacific Ocean and its teleconnections may be predictable.
Abstract:
Apical leaf necrosis is a physiological process related to nitrogen (N) dynamics in the leaf. Pathogens use leaf nutrients and can thus accelerate this physiological apical necrosis. This process differs from necrosis occurring around pathogen lesions (lesion-induced necrosis), which is a direct result of the interaction between pathogen hyphae and leaf cells. This paper primarily concentrates on apical necrosis, only incorporating lesion-induced necrosis by necessity. The relationship between pathogen dynamics and physiological apical leaf necrosis is modelled through leaf nitrogen dynamics. The specific case of Puccinia triticina infections on Triticum aestivum flag leaves is studied. In the model, conversion of indirectly available N in the form of, for example, leaf cell proteins (N₂(t)) into directly available N (N₁(t), i.e. the form of N that can be used directly by either pathogen or plant sinks) results in apical necrosis. The model reproduces observed trends in disease severity, apical necrosis, green leaf area (GLA) and leaf N dynamics of uninfected and infected leaves. Decreasing the initial amount of directly available N results in earlier necrosis onset and longer necrosis duration. Decreasing the initial amount of indirectly available N has no effect on necrosis onset and shortens necrosis duration. The model could be used to develop hypotheses on how the disease-GLA relation affects yield loss, which can be tested experimentally. Upon incorporation into crop simulation models, the model might provide a tool to more accurately estimate crop yield and the effects of disease management strategies in crops sensitive to fungal pathogens.
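To make the N bookkeeping concrete, here is a deliberately simplified two-pool sketch (explicit Euler, Python) in the spirit of the model described above: sinks draw on directly available N (N₁) first, indirectly available N (N₂) is converted only to cover the shortfall, and the fraction of N₂ already converted is taken as a crude proxy for apical necrosis. All pool sizes, rate constants and functional forms are invented for illustration and are not the published wheat/Puccinia triticina model.

```python
import numpy as np

def simulate_leaf_n(n1_0=1.0, n2_0=4.0, k_conv=0.15, plant_sink=0.06,
                    pathogen_sink=0.0, days=60, dt=0.1):
    """Deliberately simplified two-pool leaf nitrogen bookkeeping (explicit Euler).

    N1: directly available N, drawn on first by plant (and, on infected leaves,
    pathogen) sinks. N2: indirectly available N (e.g. leaf proteins), converted
    only to cover the shortfall once N1 runs low. The fraction of N2 already
    converted is used here as a crude proxy for the apically necrotic fraction.
    """
    steps = int(days / dt) + 1
    t = np.arange(steps) * dt
    n1 = np.empty(steps)
    n2 = np.empty(steps)
    n1[0], n2[0] = n1_0, n2_0
    for i in range(1, steps):
        demand = (plant_sink + pathogen_sink) * dt
        take = min(n1[i - 1], demand)                    # meet demand from N1 first
        shortfall = demand - take
        conv = min(k_conv * n2[i - 1] * dt, shortfall)   # convert N2 only as needed
        n1[i] = n1[i - 1] - take
        n2[i] = n2[i - 1] - conv
    necrotic_fraction = 1.0 - n2 / n2_0
    return t, n1, n2, necrotic_fraction

# A pathogen sink drains N1 sooner, so conversion of N2 (necrosis) starts earlier.
for sink in (0.0, 0.04):
    t, n1, n2, nec = simulate_leaf_n(pathogen_sink=sink)
    onset = t[np.argmax(nec > 0.01)] if np.any(nec > 0.01) else float("nan")
    print(f"pathogen sink {sink}: onset ~day {onset:.1f}, necrotic fraction at day 60 = {nec[-1]:.2f}")
```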
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce the observed seasonal variations in cloud occurrence well. Overall, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
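As a minimal illustration of the "cloud frequency of occurrence" diagnostic used above, the Python sketch below computes, from a time-height array of cloud fraction (observed or modelled), the fraction of valid samples at each level for which cloud is present above a small threshold. The threshold value, array layout and the synthetic comparison are assumptions for illustration, not the evaluation code of the study.

```python
import numpy as np

def occurrence_profile(cloud_fraction, threshold=0.05):
    """Frequency of cloud occurrence per vertical level.

    cloud_fraction: array (time, level), NaN where no valid sample exists.
    A level counts as cloudy at a given time when its cloud fraction exceeds
    `threshold`; the result is the cloudy fraction of valid samples per level.
    """
    valid = ~np.isnan(cloud_fraction)
    cloudy = valid & (cloud_fraction > threshold)
    return cloudy.sum(axis=0) / valid.sum(axis=0)

# Synthetic example: observations vs a model that produces cloud too often
rng = np.random.default_rng(2)
obs = rng.uniform(0, 1, (1000, 20)) * (rng.uniform(0, 1, (1000, 20)) < 0.30)
mod = rng.uniform(0, 1, (1000, 20)) * (rng.uniform(0, 1, (1000, 20)) < 0.45)
print(occurrence_profile(obs).mean(), occurrence_profile(mod).mean())
```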