923 results for Ratio-Dependant Predator-Prey Model
Abstract:
Fiber-reinforced polymer (FRP) composites have found widespread use in the repair and strengthening of concrete structures. FRP composites exhibit a high strength-to-weight ratio and corrosion resistance, and are convenient to use in repair applications. Externally bonded FRP flexural strengthening of concrete beams is the most widespread application of this technique. A common cause of failure in such members is intermediate crack-induced debonding (IC debonding), in which the FRP substrate separates abruptly from the concrete. Continuous monitoring of the concrete–FRP interface is essential to prevent IC debonding. Objective condition assessment and performance evaluation are challenging activities, since they require some type of monitoring to track the response over a period of time. In this paper, a multi-objective model updating method integrated in the context of structural health monitoring is demonstrated as a promising technology for ensuring the safety and reliability of this kind of strengthening technique. The proposed method, solved by a multi-objective extension of the particle swarm optimization method, is based on strain measurements under controlled loading. The use of permanently installed fiber Bragg grating (FBG) sensors, embedded into the FRP-concrete interface or bonded onto the FRP strip, together with the proposed methodology results in an automated method able to operate in an unsupervised mode.
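To make the updating step above concrete, here is a minimal sketch of PSO-driven model updating from strain residuals. The paper uses a true multi-objective extension of PSO; for brevity this sketch scalarizes two strain-residual objectives with fixed weights, and the forward model standing in for the structural finite element model (`predicted_strains`) is entirely hypothetical.

```python
# Minimal sketch of PSO-based model updating from strain residuals.
# The paper uses a multi-objective PSO; this sketch scalarizes two
# objectives (e.g. strain residuals from two load cases) with fixed
# weights. The "forward model" and measurements are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def predicted_strains(theta):
    # Hypothetical forward model: strains as a function of two
    # stiffness-like updating parameters theta = (k1, k2).
    k1, k2 = theta
    x = np.linspace(0.0, 1.0, 8)
    return np.vstack([k1 * x * (1 - x), k2 * np.sin(np.pi * x)])

true_theta = np.array([2.0, 0.8])
measured = predicted_strains(true_theta)   # stand-in for FBG readings

def objectives(theta):
    res = predicted_strains(theta) - measured
    return np.linalg.norm(res[0]), np.linalg.norm(res[1])

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5, weights=(0.5, 0.5)):
    pos = rng.uniform(0.0, 5.0, size=(n, 2))
    vel = np.zeros_like(pos)
    cost = lambda th: float(np.dot(weights, objectives(th)))
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g

print(pso())   # should approach the "true" parameters [2.0, 0.8]
```

In the unsupervised monitoring setting described above, a drift of the identified parameters away from their baseline values would flag a possible change at the concrete–FRP interface.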
Abstract:
The CENTURY soil organic matter (SOM) model was adapted to the modular format of DSSAT (Decision Support System for Agrotechnology Transfer) in order to better simulate the dynamics of soil organic nutrient processes (Gijsman et al., 2002). The CENTURY model divides soil organic carbon (SOC) into three hypothetical pools: microbial or active material (SOC1), intermediate material (SOC2), and largely inert, stable material (SOC3) (Jones et al., 2003). At the beginning of the simulation, the CENTURY model needs a value of SOC3 per soil layer, which can be estimated by the model (based on soil texture and management history) or given as an input. The model then assigns about 5% and 95% of the remaining SOC to SOC1 and SOC2, respectively. The model's performance when simulating SOC and nitrogen (N) dynamics depends strongly on this initialization process. The common methods of initializing the SOC pools (e.g. Basso et al., 2011) deal mostly with carbon (C) mineralization processes and less with N. The dynamics of SOM, SOC, and soil organic N are linked in the CENTURY-DSSAT model through the C/N ratio of the decomposing material, which determines either mineralization or immobilization of N (Gijsman et al., 2002). The aim of this study was to evaluate an alternative method of initializing the SOC pools in the DSSAT-CENTURY model from apparent soil N mineralization (Napmin) field measurements by automatic inverse calibration (simulated annealing). The results were compared with those obtained by the iterative initialization procedure developed by Basso et al. (2011).
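A minimal sketch of the partitioning step described above, written for illustration (this is not DSSAT/CENTURY source code): given the total SOC of a layer and an estimate of the stable pool SOC3, the remainder is split roughly 5%/95% between the active and intermediate pools.

```python
# Illustrative sketch of the CENTURY SOC pool partitioning step:
# given total measured SOC and an estimate of the stable pool SOC3,
# the remainder is split ~5% / 95% between the active (SOC1) and
# intermediate (SOC2) pools.

def init_century_pools(soc_total, soc3, f_active=0.05):
    """Partition total SOC (e.g. kg C/ha per layer) into the three
    hypothetical CENTURY pools. soc3 may come from the model's own
    texture/management estimate or from an inverse calibration."""
    if soc3 > soc_total:
        raise ValueError("stable pool cannot exceed total SOC")
    remainder = soc_total - soc3
    soc1 = f_active * remainder           # microbial / active pool
    soc2 = (1.0 - f_active) * remainder   # intermediate pool
    return soc1, soc2, soc3

# Example: 30,000 kg C/ha in a layer, 60% of it assumed stable.
print(init_century_pools(30000.0, 0.60 * 30000.0))
# -> (600.0, 11400.0, 18000.0)
```

The inverse-calibration idea evaluated in the study amounts to treating the stable fraction as a free parameter and searching (here, by simulated annealing) for the value that best reproduces the measured Napmin series.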
Abstract:
The depletion, the absence, or simply the uncertainty about the size of fossil fuel reserves, added to the volatility of their prices and the increasing instability of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprises strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, the public acceptance of the new energy carrier will depend, a priori, on the control of the risks associated with its handling and storage. Among these, the undeniable danger of explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is strongly limited. The introduction presents a general description of explosion processes and concludes that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out the strengths, deficiencies, and suitability of each. This review concludes that, given the existing limitations, the only viable strategy for combustion modeling is an expression for the turbulent burning velocity as a function of various parameters, used to close a balance equation for the combustion progress variable; models of this kind are known as turbulent flame speed models. It also concludes that the most adequate approach to turbulence is to use different simulation methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem. Based on these findings, a combustion model is created within the turbulent flame speed framework. The proposed methodology overcomes the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, the temperature, the pressure, and the dilution with steam. The resulting formulation is valid over a wider range of temperature, pressure, and steam dilution than any of the previously available formulations.
The turbulent burning velocity, in turn, can be obtained from correlations that express it as a function of various parameters. To select the most suitable formulation, the results obtained with a number of such expressions were compared with experiments; the formulation due to Schmidt proved the most adequate for the conditions studied. Next, the role of flame instabilities in the propagation of combustion fronts is assessed. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents in nuclear power plants. A model is therefore developed to estimate the effect of the instabilities, and specifically of the acoustic-parametric instability, on the flame propagation speed. The modeling comprises the mathematical derivation of the heuristic formulation of Bauwens et al. for the burning velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; these results are combined to complete the model of the acoustic-parametric instability. The research then turns to the application of the developed model to several problems of importance for industrial safety, followed by analysis of the results and comparison with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out. The aim of the analysis is to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies on the subject, which have centered on the amount of hydrogen generated during the accident. The investigation determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage, which illustrates the importance of this type of investigation. The industrial branches for which the developed model will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport sector and on nuclear energy, for both fission and fusion technologies.
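Since the abstract above builds on turbulent flame speed closures of the Zimont family, a minimal sketch may help fix ideas. The correlation below is the standard published Zimont form, not the thesis's own modified model, and all input values are hypothetical.

```python
# Illustrative evaluation of the standard Zimont-type turbulent flame
# speed closure (not the thesis's modified model):
#   S_T = A * u'^(3/4) * S_L^(1/2) * alpha^(-1/4) * l_t^(1/4)
# with A ~ 0.5, u' the turbulence intensity, S_L the laminar burning
# velocity, alpha the unburnt thermal diffusivity and l_t the
# integral length scale. All numbers below are hypothetical.

def zimont_flame_speed(u_prime, s_l, alpha, l_t, a=0.52):
    return a * u_prime**0.75 * s_l**0.5 * alpha**-0.25 * l_t**0.25

# Lean hydrogen-air example values (illustrative only):
s_t = zimont_flame_speed(u_prime=2.0,    # m/s
                         s_l=0.6,        # m/s
                         alpha=2.2e-5,   # m^2/s
                         l_t=0.1)        # m
print(f"turbulent flame speed ~ {s_t:.2f} m/s")
```

A closure of this kind is what feeds the balance equation for the combustion progress variable; the thesis's contribution described above is to supply a better laminar burning velocity S_L, an instability enhancement factor, and a heuristic to control the flame brush thickness.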
Abstract:
This paper presents a Finite Element Model which has been used for forecasting the diffusion of innovations in time and space. Unlike conventional models in the diffusion literature, this model accounts for spatial heterogeneity. The implementation steps of the model are explained by applying it to the diffusion of photovoltaic systems in a local region in southern Germany. The model is based on a parabolic partial differential equation that describes the diffusion ratio of photovoltaic systems in a given region over time. The results of the application show that the Finite Element Model is a powerful tool for better understanding the diffusion of an innovation as a simultaneous space-time process. Model limitations and possible extensions for future research are also discussed.
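A sketch of the kind of parabolic equation described above may help: a diffusion-logistic (Fisher-KPP-type) equation for the adoption ratio p(x, t), dp/dt = D d2p/dx2 + r p (1 - p). The paper solves its equation with finite elements in two dimensions; this 1-D explicit finite-difference sketch, with hypothetical D and r, only illustrates the simultaneous space-time character of the process.

```python
# 1-D diffusion-logistic sketch of innovation adoption p(x, t):
#   dp/dt = D * d2p/dx2 + r * p * (1 - p)
# Hypothetical D, r; explicit finite differences for brevity
# (the paper itself uses a finite element discretization).
import numpy as np

D, r = 0.5, 0.2            # diffusivity and internal-influence rate
nx, dx = 201, 0.25         # 50-unit 1-D region
dt = 0.4 * dx**2 / D       # stable explicit time step
p = np.zeros(nx)
p[:5] = 0.1                # early adopters clustered at one edge

for _ in range(1500):
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]    # crude no-flux boundaries
    p += dt * (D * lap + r * p * (1.0 - p))

print(f"mean adoption ratio at t = {1500 * dt:.0f}: {p.mean():.2f}")
```

The diffusion term spreads adoption from early-adopter clusters into neighbouring areas while the logistic term saturates each location, which is exactly the space-time coupling the abstract emphasizes.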
Abstract:
A simple semi-empirical model for the aerodynamic behavior of a low-aspect-ratio pararotor in autorotation at low Reynolds numbers is presented. The paper is split into three sections: Sec. II deals with the derivation of the theoretical model, Sec. III with the wind-tunnel measurements needed for tuning the theoretical model, and Sec. IV with the tuning of the theoretical model against the experimental data. The study focuses on the effects of the blade pitch angle, the blade roughness, and the stream velocity on the rotation velocity and on the drag of a model. Flow-pattern visualizations have also been performed. The values of the free aerodynamic parameters of the semi-empirical model that produce the best fit with the experimental results agree with those expected for the blades at the test conditions. Finally, the model is able to describe the behavior of a pararotor in autorotation rotating fixed to a shaft, validated for a range of blade pitch angles. The movement of the device is found to be governed by a reduced set of dimensionless parameters.
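The tuning step described above amounts to a least-squares fit of the model's free aerodynamic parameters to wind-tunnel data. The sketch below shows that workflow with scipy.optimize.curve_fit; the functional form `rotation_rate` and the synthetic "measurements" are hypothetical stand-ins, not the paper's actual model.

```python
# Sketch of tuning a semi-empirical model against wind-tunnel data by
# least squares. The rotation-rate law and the "measurements" are
# hypothetical stand-ins for the paper's model and experiments.
import numpy as np
from scipy.optimize import curve_fit

def rotation_rate(v_stream, c1, c2):
    # Hypothetical law: equilibrium rotation speed grows linearly with
    # stream velocity, offset by a pitch/roughness-dependent term.
    return c1 * v_stream + c2

v = np.linspace(5.0, 25.0, 9)                 # stream velocity, m/s
rng = np.random.default_rng(1)
omega_meas = 12.0 * v - 15.0 + rng.normal(0, 3, v.size)  # synthetic data

(c1, c2), _ = curve_fit(rotation_rate, v, omega_meas)
print(f"fitted c1 = {c1:.2f}, c2 = {c2:.2f}")  # ~12 and ~-15
```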
Abstract:
The snowshoe hare and the Canadian lynx in the boreal forests of North America show 9- to 11-year density cycles. These are generally assumed to be linked to each other because lynx are specialist predators on hares. Based on time series data for hare and lynx, we show that the dominant dimensional structure of the hare series appears to be three whereas that of the lynx is two. The three-dimensional structure of the hare time series is hypothesized to be due to a three-trophic level model in which the hare may be seen as simultaneously regulated from below and above. The plant species in the hare diet appear compensatory to one another, and the predator species may, likewise, be seen as an internally compensatory guild. The lynx time series are, in contrast, consistent with a model of donor control in which their populations are regulated from below by prey availability. Thus our analysis suggests that the classic view of a symmetric hare–lynx interaction is too simplistic. Specifically, we argue that the classic food chain structure is inappropriate: the hare is influenced by many predators other than the lynx, and the lynx is primarily influenced by the snowshoe hare.
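The asymmetry argued for above can be caricatured in a three-trophic ODE sketch: the hare H is regulated from below by vegetation V and from above by a lumped predator guild, while the lynx L is donor-controlled, tracking hare availability without separate feedback on the hare. This is an illustrative toy model with hypothetical parameters, not the statistical model fitted in the paper.

```python
# Toy three-trophic sketch of the asymmetric hare-lynx structure:
# hares limited from below (plants V) and above (a lumped predator
# guild, mortality d); lynx donor-controlled by hare supply.
# All parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, rv=1.0, Kv=20.0, g=0.05, eh=0.4, d=0.3, el=0.25, m=0.5):
    V, H, L = y
    dV = rv * V * (1.0 - V / Kv) - g * V * H   # plants grazed by hares
    dH = eh * g * V * H - d * H                # guild predation lumped into d
    dL = el * H - m * L                        # lynx tracks hare supply only
    return [dV, dH, dL]

sol = solve_ivp(rhs, (0.0, 200.0), [10.0, 2.0, 1.0])
print("V, H, L at t = 200:", np.round(sol.y[:, -1], 2))
# damps toward the equilibrium (15, 5, 2.5) for these parameters
```

Note how the hare equation involves two interactions (plants and predators) while the lynx equation involves only one, mirroring the three- versus two-dimensional structure inferred from the time series.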
Abstract:
An improved mammalian two-hybrid system designed for interaction-trap screening is described in this paper. CV-1/EBNA-1 monkey kidney epithelial cells expressing Epstein–Barr virus nuclear antigen 1 (EBNA-1) were stably transfected with a reporter plasmid for GAL4-dependent expression of the green fluorescent protein (GFP). A resulting clone, GB133, expressed GFP strongly when transiently transfected with transcriptional activators fused to the GAL4 DNA-binding domain, with minimal background GFP expression. GB133 cells maintained plasmids containing OriP, the Epstein–Barr virus replication origin that directs replication of plasmids in mammalian cells in the presence of the EBNA-1 protein. GB133 cells stably transfected with a model bait expressed GFP when further transfected transiently with an expression plasmid for a known positive prey. When the bait-expressing GB133 cells were transiently transfected with an OriP-containing expression plasmid for the positive prey together with excess amounts of empty vector, cells that received the positive prey were readily identified by green fluorescence in cell culture and eventually formed green fluorescent microcolonies, because the prey plasmid was maintained by the EBNA-1/OriP system. The green fluorescent microcolonies were harvested directly from the culture dishes under a fluorescence microscope, and total DNA was then prepared. Prey-encoding cDNA was recovered by PCR using primers annealing to the vector sequences flanking the insert-cloning site. This system should be useful for efficient screening of cDNA libraries by two-hybrid interaction in mammalian cells.
Abstract:
Many prey modify traits in response to predation risk and this modification of traits can influence the prey's resource acquisition rate. A predator thus can have a “nonlethal” impact on prey that can lead to indirect effects on other community members. Such indirect interactions are termed trait-mediated indirect interactions because they arise from a predator's influence on prey traits, rather than prey density. Because such nonlethal predator effects are immediate, can influence the entire prey population, and can occur over the entire prey lifetime, we argue that nonlethal predator effects are likely to contribute strongly to the net indirect effects of predators (i.e., nonlethal effects may be comparable in magnitude to those resulting from killing prey). This prediction was supported by an experiment in which the indirect effects of a larval dragonfly (Anax sp.) predator on large bullfrog tadpoles (Rana catesbeiana), through nonlethal effects on competing small bullfrog tadpoles, were large relative to indirect effects caused by density reduction of the small tadpoles (the lethal effect). Treatments in which lethal and nonlethal effects of Anax were manipulated independently indicated that this result was robust for a large range of different combinations of lethal and nonlethal effects. Because many, if not most, prey modify traits in response to predators, our results suggest that the magnitude of interaction coefficients between two species may often be dynamically related to changes in other community members, and that many indirect effects previously attributed to the lethal effects of predators may instead be due to shifts in traits of surviving prey.
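The decomposition of a predator's net indirect effect into lethal and nonlethal components can be sketched with a toy competition model: the lethal effect is an extra mortality mu on the small tadpoles, and the nonlethal effect is a foraging-activity factor f < 1 that slows their resource acquisition and weakens their competitive impact on the large tadpoles. The model and parameters are hypothetical, not those of the experiment.

```python
# Toy decomposition of lethal (mu) vs nonlethal (f) predator effects
# on two competing consumers: B = large tadpoles, S = small tadpoles.
# All parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, f, mu, rB=0.8, rS=1.0, K=10.0, a=0.7):
    B, S = y
    dB = rB * B * (1.0 - (B + a * f * S) / K)  # f < 1 weakens S's impact on B
    dS = rS * f * S * (1.0 - (S + a * B) / K) - mu * S
    return [dB, dS]

treatments = {"control":        (1.0, 0.0),
              "lethal only":    (1.0, 0.1),   # density reduction, full foraging
              "nonlethal only": (0.5, 0.0),   # reduced foraging, no mortality
              "both":           (0.5, 0.1)}
for name, (f, mu) in treatments.items():
    sol = solve_ivp(rhs, (0.0, 80.0), [1.0, 5.0], args=(f, mu))
    print(f"{name:14s} -> large-tadpole density ~ {sol.y[0, -1]:.2f}")
```

For these illustrative parameters the trait-mediated ("nonlethal only") treatment benefits the large tadpoles about as much as or more than the density-mediated one, echoing the paper's central claim.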
Abstract:
The kinetics of amyloid fibril formation by beta-amyloid peptide (Abeta) are typical of a nucleation-dependent polymerization mechanism. This type of mechanism suggests that the study of the interaction of Abeta with itself can provide some valuable insights into Alzheimer disease amyloidosis. Interaction of Abeta with itself was explored with the yeast two-hybrid system. Fusion proteins were created by linking the Abeta fragment to a LexA DNA-binding domain (bait) and also to a B42 transactivation domain (prey). Protein-protein interactions were measured by expression of these fusion proteins in Saccharomyces cerevisiae harboring lacZ (beta-galactosidase) and LEU2 (leucine utilization) genes under the control of LexA-dependent operators. This approach suggests that the Abeta molecule is capable of interacting with itself in vivo in the yeast cell nucleus. LexA protein fused to the Drosophila protein bicoid (LexA-bicoid) failed to interact with the B42 fragment fused to Abeta, indicating that the observed Abeta-Abeta interaction was specific. Specificity was further shown by the finding that no significant interaction was observed in yeast expressing LexA-Abeta bait when the B42 transactivation domain was fused to an Abeta fragment with Phe-Phe at residues 19 and 20 replaced by Thr-Thr (AbetaTT), a finding that is consistent with in vitro observations made by others. Moreover, when a peptide fragment bearing this substitution was mixed with native Abeta-(1-40), it inhibited formation of fibrils in vitro as examined by electron microscopy. The findings presented in this paper suggest that the two-hybrid system can be used to study the interaction of Abeta monomers and to define the peptide sequences that may be important in nucleation-dependent aggregation.
Abstract:
Electrical energy storage is a highly important issue nowadays. Since electricity is not easy to store directly, it can be stored in other forms and converted back to electricity when needed. Storage technologies can consequently be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid ones, in the two main types known as flooded and valve-regulated. Batteries are needed in many important applications, such as renewable energy systems and motor vehicles. Consequently, in order to simulate these complex electrical systems, reliable battery models are needed. Although some models developed by experts in chemistry exist, they are too complex and are not expressed in terms of electrical networks. They are thus inconvenient for practical use by electrical engineers, who need to interface battery models with models of other electrical systems, usually described by means of electrical circuits. Many techniques for modeling a battery are available in the literature. Starting from the Thevenin-based electrical model, the model can be adapted to the lead-acid battery type by adding a parasitic-reaction branch and a parallel network. The third-order formulation of this model is chosen, being a trustworthy general-purpose model characterized by a good ratio between accuracy and complexity. Considering the equivalent circuit network, all the relevant equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model is finally validated and then used to simulate the battery behaviour in different typical conditions.
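As an illustration of the modeling approach described above, here is a minimal Thevenin-style equivalent-circuit sketch in Python rather than Matlab/Simulink: an open-circuit voltage source behind a series resistance and two RC branches. The full third-order lead-acid model additionally includes a parasitic-reaction branch and SOC- and temperature-dependent parameters, which are omitted here; all element values are hypothetical.

```python
# Minimal Thevenin-style equivalent-circuit battery sketch: OCV source
# behind a series resistance R0 and two RC branches. The third-order
# lead-acid model adds a parasitic-reaction branch and SOC-dependent
# parameters, omitted here. All element values are hypothetical.
import numpy as np

E0, R0 = 12.6, 0.05          # open-circuit voltage [V], series R [ohm]
R1, C1 = 0.03, 2000.0        # first RC branch
R2, C2 = 0.06, 12000.0       # second (slower) RC branch
Q = 60.0 * 3600.0            # capacity [C] (60 Ah)

def simulate(i_load, t_end, dt=1.0):
    """Constant-current discharge; returns time and terminal voltage."""
    n = int(t_end / dt)
    v1 = v2 = 0.0            # RC branch voltages
    soc = 1.0
    t, v = np.zeros(n), np.zeros(n)
    for k in range(n):
        v1 += dt * (i_load / C1 - v1 / (R1 * C1))  # forward-Euler RC update
        v2 += dt * (i_load / C2 - v2 / (R2 * C2))
        soc -= dt * i_load / Q
        ocv = E0 - 1.2 * (1.0 - soc)               # crude linear OCV-SOC law
        t[k], v[k] = k * dt, ocv - R0 * i_load - v1 - v2
    return t, v

t, v = simulate(i_load=6.0, t_end=3600.0)          # 6 A for one hour
print(f"terminal voltage after 1 h at 6 A: {v[-1]:.2f} V")
```

Each RC branch contributes one state variable, so adding the parasitic branch to this two-branch structure yields the third-order formulation the abstract refers to.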
Abstract:
Within the framework of the Baikal Drilling Project (BDP), a 192 m long sediment core (BDP-96-1) was recovered from the Academician Ridge, a submerged topographic high between the North and Central Basins of Lake Baikal. Sedimentological, clay mineralogical and geochemical investigations were carried out on the core interval between 90 and 124 m depth, corresponding to ca. 2.4-3.4 Ma. The aim was to reconstruct the climatic and tectonic history of the continental region during the intensification of Northern Hemisphere glaciation in Late Pliocene time. A major climate change occurred in the Lake Baikal area at about 2.65 Ma. Enhanced physical weathering in the catchment, mirrored in the illite to smectite ratio, and temporarily reduced bioproduction in the lake, reflected by the diatom abundance, evidence a change towards a colder and more arid climate, probably associated with an intensification of the Siberian High. In addition, the coincident onset of distinct fluctuations in these parameters and in the Zr/Al ratio suggests the beginning of the Late Cenozoic high amplitude climate cycles at about 2.65 Ma. Fluctuations in the Zr/Al ratio are traced back to changes in the aeolian input, with high values in warmer, more humid phases due to a weaker Siberian High. Assuming that the sand content in the sediment reflects tectonic pulses, the Lake Baikal area was tectonically active during the entire investigated period, but in particular around 2.65 Ma. Tectonic movements have likely led to a gradual catchment change since about 3.15 Ma from the western towards the eastern lake surroundings, as indicated in the geochemistry and clay mineralogy of the sediments. The strong coincidence between tectonic and climatic changes in the Baikal area hints at the Himalayan uplift being one of the triggers for the Northern Hemisphere Glaciation.
Abstract:
Benthic δ13C values (F. wuellerstorfi), kaolinite/chlorite ratios, and sortable-silt median grain sizes in sediments of a core from the abyssal Agulhas Basin record the varying impact of North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW) during the last 200 ka. The data indicate that NADW influence decreased during glacials and increased during interglacials, in concert with the global climatic changes of the late Quaternary. In contrast, AABW displays a much more complex behaviour. Two independent modes of deep-water formation contributed to AABW production in the Weddell Sea: (1) brine rejection during sea-ice formation in polynyas and in the sea-ice zone (Polynya Mode), and (2) super-cooling of Ice Shelf Water (ISW) beneath the Antarctic ice shelves (Ice Shelf Mode). Varying contributions of the two modes led to a high millennial-scale variability of AABW production and export to the Agulhas Basin. The highest rates of AABW production occurred during early glacials, when increased sea-ice formation and active ISW production formed substantial amounts of deep water. Once full glacial conditions were reached and the Antarctic ice sheet grounded on the shelf, ISW production shut down and only brine rejection generated moderate amounts of deep water. AABW production rates dropped to an absolute minimum during Terminations I and II and the Marine Isotope Stage (MIS) 4/3 transition. Reduced sea-ice formation, concurrent with an enhanced freshwater influx from melting ice, lowered the density of the surface water in the Weddell Sea, further reducing deep-water formation via brine rejection while ISW formation was not yet operating again. During interglacials and the moderate interglacial MIS 3, both brine formation and ISW production were operating, contributing varying amounts to AABW formation in the Weddell Sea.