878 results for physically based modeling
Abstract:
[1] We present a new, process-based model of soil and stream water dissolved organic carbon (DOC): the Integrated Catchments Model for Carbon (INCA-C). INCA-C is the first model of DOC cycling to explicitly include effects of different land cover types, hydrological flow paths, in-soil carbon biogeochemistry, and surface water processes on in-stream DOC concentrations. It can be calibrated using only routinely available monitoring data. INCA-C simulates daily DOC concentrations over a period of years to decades. Sources, sinks, and transformation of solid and dissolved organic carbon in peat and forest soils, wetlands, and streams as well as organic carbon mineralization in stream waters are modeled. INCA-C is designed to be applied to natural and seminatural forested and peat-dominated catchments in boreal and temperate regions. Simulations at two forested catchments showed that seasonal and interannual patterns of DOC concentration could be modeled using climate-related parameters alone. A sensitivity analysis showed that model predictions were dependent on the mass of organic carbon in the soil and that in-soil process rates were dependent on soil moisture status. Sensitive rate coefficients in the model included those for organic carbon sorption and desorption and DOC mineralization in the soil. The model was also sensitive to the amount of litter fall. Our results show the importance of climate variability in controlling surface water DOC concentrations and suggest the need for further research on the mechanisms controlling production and consumption of DOC in soils.
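To make the sensitive rate coefficients concrete, here is a minimal Python sketch of first-order exchange between a solid organic carbon pool and a DOC pool with litter input and in-soil mineralization. All pool sizes, rate constants, and the daily forward-Euler step are illustrative assumptions, not INCA-C's actual equations.

```python
# Minimal sketch of first-order solid-carbon/DOC exchange with litter input
# and in-soil DOC mineralization. All values are illustrative assumptions,
# not INCA-C parameters.
k_sorp, k_desorp, k_min = 0.02, 0.01, 0.005  # rate coefficients, per day
soc, doc = 5000.0, 10.0                      # pools, g C / m^2
litter = 1.0                                 # litter fall, g C / m^2 / day
exchangeable = 1e-3                          # assumed exchangeable SOC fraction

for day in range(365):                          # daily step over one year
    sorption = k_sorp * doc                     # DOC -> solid pool
    desorption = k_desorp * soc * exchangeable  # solid pool -> DOC
    mineralization = k_min * doc                # DOC lost to mineralization
    soc += litter + sorption - desorption
    doc += desorption - sorption - mineralization

print(f"DOC after one year: {doc:.2f} g C/m^2")
```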
Abstract:
This study presents a numerical method to derive the Darcy-Weisbach friction coefficient for overland flow under partial inundation of surface roughness. To better account for the variable influence of roughness with varying levels of emergence, we model the flow over a network which evolves as the free surface rises. This network is constructed using a numerical height map, based on surface roughness data, and a discrete geometry skeletonization algorithm. By applying a hydraulic model to the flows through this network, local heads, velocities, and Froude and Reynolds numbers over the surface can be estimated. These quantities enable us to analyze the flow and ultimately to derive a bulk friction factor for flow over the entire surface which takes into account local variations in flow quantities. Results demonstrate that although the flow is laminar, head losses are chiefly inertial because of local flow disturbances. The results also emphasize that for conditions of partial inundation, flow resistance varies nonmonotonically but does generally increase with progressive roughness inundation.
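For reference, the bulk quantities derived here can be written with the standard uniform-flow relations; the sketch below is only these textbook formulas with assumed example values, not the paper's network/skeletonization algorithm.

```python
G = 9.81     # gravitational acceleration, m/s^2
NU = 1.0e-6  # kinematic viscosity of water, m^2/s

def darcy_weisbach_f(velocity, hydraulic_radius, slope):
    """Bulk Darcy-Weisbach friction factor for uniform flow,
    f = 8 g R S / v^2, with energy slope S."""
    return 8.0 * G * hydraulic_radius * slope / velocity ** 2

def reynolds(velocity, depth):
    """Depth-based Reynolds number Re = v h / nu (laminar below ~500)."""
    return velocity * depth / NU

# Assumed example: 5 mm deep overland flow on a 1% slope at 2 cm/s
v, h, s = 0.02, 0.005, 0.01
print(f"f  = {darcy_weisbach_f(v, h, s):.2f}")  # ~9.8
print(f"Re = {reynolds(v, h):.0f}")             # 100 -> laminar
```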
Case study of the use of remotely sensed data for modeling flood inundation on the River Severn, UK.
Abstract:
A methodology for using remotely sensed data to both generate and evaluate a hydraulic model of floodplain inundation is presented for a rural case study in the United Kingdom: Upton-upon-Severn. Remotely sensed data have been processed and assembled to provide an excellent test data set for both model construction and validation. In order to assess the usefulness of the data and the issues encountered in their use, two models for floodplain inundation were constructed: one based on an industry standard one-dimensional approach and the other based on a simple two-dimensional approach. The results and their implications for the future use of remotely sensed data for predicting flood inundation are discussed. Key conclusions for the use of remotely sensed data are that care must be taken to integrate different data sources for both model construction and validation and that improvements in ground height data shift the focus in terms of model uncertainties to other sources such as boundary conditions. The differences between the two models are found to be of minor significance.
Abstract:
The problem of modeling solar energetic particle (SEP) events is important to both space weather research and forecasting, and yet it has seen relatively little progress. Most important SEP events are associated with coronal mass ejections (CMEs) that drive coronal and interplanetary shocks. These shocks can continuously produce accelerated particles from the ambient medium to well beyond 1 AU. This paper describes an effort to model real SEP events using a Center for Integrated Space Weather Modeling (CISM) MHD solar wind simulation including a cone model of CMEs to initiate the related shocks. In addition to providing observation-inspired shock geometry and characteristics, this MHD simulation describes the time-dependent observer field line connections to the shock source. As a first approximation, we assume a shock jump-parameterized source strength and spectrum, and that scatter-free transport occurs outside of the shock source, thus emphasizing the role the shock evolution plays in determining the modeled SEP event profile. Three halo CME events on May 12, 1997, November 4, 1997, and December 13, 2006 are used to test the modeling approach. While challenges arise in the identification and characterization of the shocks in the MHD model results, this approach illustrates the importance to SEP event modeling of globally simulating the underlying heliospheric event. The results also suggest the potential utility of such a model for forecasting and for interpretation of separated multipoint measurements such as those expected from the STEREO mission.
Abstract:
The geospace environment is controlled largely by events on the Sun, such as solar flares and coronal mass ejections, which generate significant geomagnetic and upper atmospheric disturbances. The study of this Sun-Earth system, which has become known as space weather, has both intrinsic scientific interest and practical applications. Adverse conditions in space can damage satellites and disrupt communications, navigation, and electric power grids, as well as endanger astronauts. The Center for Integrated Space Weather Modeling (CISM), a Science and Technology Center (STC) funded by the U.S. National Science Foundation (see http://www.bu.edu/cism/), is developing a suite of integrated physics-based computer models that describe the space environment from the Sun to the Earth for use in both research and operations [Hughes and Hudson, 2004, p. 1241]. To further this mission, advanced education and training programs sponsored by CISM encourage students to view space weather as a system that encompasses the Sun, the solar wind, the magnetosphere, and the ionosphere/thermosphere. This holds especially true for participants in the CISM space weather summer school [Simpson, 2004].
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular, we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space Weather Modeling (CISM), which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square-error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
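The point-by-point metrics mentioned are straightforward to compute; a minimal sketch (function and array names assumed) of the mean-square error and correlation coefficient used for this kind of validation, plus a shift search of the sort that would expose small timing offsets:

```python
import numpy as np

def point_metrics(observed, predicted):
    """Point-by-point skill scores: mean-square error and Pearson r."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((observed - predicted) ** 2)
    r = np.corrcoef(observed, predicted)[0, 1]
    return mse, r

def best_lag(observed, predicted, max_lag):
    """MSE-minimizing shift (in samples) between two equal-length series,
    a simple way to quantify timing offsets in model output."""
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            o, p = observed[lag:], predicted[:len(predicted) - lag]
        else:
            o, p = observed[:lag], predicted[-lag:]
        scores[lag] = np.mean((np.asarray(o, float) - np.asarray(p, float)) ** 2)
    return min(scores, key=scores.get)
```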
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event-based analysis technique is developed in which high-speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
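A minimal sketch of the event-association step (function names, greedy matching rule, and window value are assumptions, not the paper's exact procedure): given observed and modeled HSE arrival times, count hits, misses, and false alarms within a matching window and keep timing errors for the hits.

```python
def match_events(observed_times, model_times, window=2.0):
    """Greedy association of modeled events with observed events.

    A model event within `window` (e.g., days) of an observed event is a
    hit; unmatched observed events are misses; unmatched model events are
    false alarms. Timing errors (model - observed) are kept for the hits.
    """
    hits, timing_errors = 0, []
    unmatched = list(model_times)
    for t_obs in observed_times:
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda t: abs(t - t_obs))
        if abs(nearest - t_obs) <= window:
            hits += 1
            timing_errors.append(nearest - t_obs)
            unmatched.remove(nearest)
    misses = len(observed_times) - hits
    false_alarms = len(unmatched)
    return hits, misses, false_alarms, timing_errors
```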
Abstract:
Two models for predicting Septoria tritici on winter wheat (cv. Riband) were developed using a program based on an iterative search of correlations between disease severity and weather. Data from four consecutive cropping seasons (1993/94 to 1996/97) at nine sites throughout England were used. A qualitative model predicted the presence or absence of Septoria tritici (at a 5% severity threshold within the top three leaf layers) using winter temperature (January/February) and wind speed up to about the first node detectable growth stage. For sites above the disease threshold, a quantitative model predicted severity of Septoria tritici using rainfall during stem elongation. A test statistic was derived to test the validity of the iterative search used to obtain both models. This statistic was used in combination with bootstrap analyses in which the search program was rerun using weather data from previous years, and therefore uncorrelated with the disease data, to investigate how likely correlations such as the ones found in our models would have been in the absence of genuine relationships.
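The validity check can be sketched as follows. The paper reruns its search program on weather data from earlier years (uncorrelated with the disease data); the stand-in below instead permutes the disease series, which captures the same idea of asking how often a search over many candidate weather variables finds a correlation as strong as the observed one by chance alone. All names and the permutation scheme are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_abs_r(disease, weather_vars):
    """Strongest |correlation| found over all candidate weather variables,
    mimicking the iterative search for the best predictor."""
    return max(abs(np.corrcoef(disease, w)[0, 1]) for w in weather_vars)

def chance_probability(disease, weather_vars, n_boot=1000):
    """Fraction of shuffled datasets in which the search finds a
    correlation at least as strong as the real one."""
    observed = best_abs_r(disease, weather_vars)
    exceed = sum(
        best_abs_r(rng.permutation(disease), weather_vars) >= observed
        for _ in range(n_boot)
    )
    return exceed / n_boot
```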
Abstract:
Milk supply from Mexican dairy farms does not meet demand, and small-scale farms can contribute toward closing the gap. Two multi-criteria programming techniques, goal programming and compromise programming, were used in a study of small-scale dairy farms in central Mexico. To build the goal and compromise programming models, 4 ordinary linear programming models were also developed, which had objective functions to maximize metabolizable energy for milk production, to maximize margin of income over feed costs, to maximize metabolizable protein for milk production, and to minimize purchased feedstuffs. Neither multi-criteria approach was significantly better than the other; however, by applying both models it was possible to perform a more comprehensive analysis of these small-scale dairy systems. The multi-criteria programming models affirm findings from previous work and suggest that a forage strategy based on alfalfa, ryegrass, and corn silage would meet nutrient requirements of the herd. Both models suggested that there is an economic advantage in rescheduling the calving season to the second and third calendar quarters to better synchronize higher demand for nutrients with the period of high forage availability.
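As a compressed illustration of the goal programming formulation (compromise programming works analogously, minimizing a distance to an ideal point), here is a toy scipy sketch: choose forage areas to meet energy and protein goals, penalizing only shortfall deviations. Every coefficient, bound, and variable name is an assumption for illustration, not a value from the study.

```python
from scipy.optimize import linprog

# Decision vector: [alfalfa_ha, ryegrass_ha, corn_silage_ha, d_energy, d_protein]
# where d_* are goal-shortfall deviation variables to be minimized.
c = [0, 0, 0, 5.0, 3.0]  # objective: weighted sum of goal shortfalls

# Goal constraints, written as -(supply + shortfall) <= -goal:
#   energy:  60*x0 + 45*x1 + 80*x2 + d_energy   >= 1000 (GJ ME, assumed)
#   protein: 1.8*x0 + 1.2*x1 + 0.9*x2 + d_protein >= 20 (t MP, assumed)
A_ub = [[-60.0, -45.0, -80.0, -1.0, 0.0],
        [-1.8, -1.2, -0.9, 0.0, -1.0]]
b_ub = [-1000.0, -20.0]
bounds = [(0, 10), (0, 10), (0, 10), (0, None), (0, None)]  # max 10 ha/forage

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)  # planted areas and any goal shortfalls
```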
Abstract:
Inferring population admixture from genetic data and quantifying it is a difficult but crucial task in evolutionary and conservation biology. Unfortunately, state-of-the-art probabilistic approaches are computationally demanding. Effectively exploiting the computational power of modern multiprocessor systems can thus have a positive impact on Monte Carlo-based simulation of admixture modeling. A novel parallel approach is briefly described, and promising results on its message passing interface (MPI)-based C implementation are reported.
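The aggregation pattern of such a parallel Monte Carlo scheme can be sketched with mpi4py; the paper's implementation is MPI-based C, so everything here, including the toy estimator, is illustrative of the pattern only.

```python
# Run with e.g.: mpiexec -n 4 python mc_sketch.py
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 1_000_000            # total Monte Carlo draws across all ranks
n_local = N_TOTAL // size      # each rank simulates its own slice

random.seed(rank)              # independent pseudo-random stream per rank
local_sum = sum(random.random() for _ in range(n_local))  # toy estimator

total = comm.reduce(local_sum, op=MPI.SUM, root=0)  # root gathers partial sums
if rank == 0:
    print("Monte Carlo estimate:", total / N_TOTAL)
```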
Abstract:
The nicotinic Acetylcholine Receptor (nAChR) is the major class of neurotransmitter receptors that is involved in many neurodegenerative conditions such as schizophrenia, Alzheimer's and Parkinson's diseases. The N-terminal region or Ligand Binding Domain (LBD) of nAChR is located at pre- and post-synaptic sites in the nervous system, where it mediates synaptic transmission. nAChR acts as the drug target for agonist and competitive antagonist molecules that modulate signal transmission at the nerve terminals. Using the Acetylcholine Binding Protein (AChBP) from Lymnaea stagnalis as the structural template, a homology modeling approach was carried out to build a three-dimensional model of the N-terminal region of human alpha(7)nAChR. This theoretical model is an assembly of five alpha(7) subunits with 5-fold axis symmetry, constituting a channel, with the binding pocket present at the interface region of the subunits. alpha-neurotoxin is a potent nAChR competitive antagonist that readily blocks the channel, resulting in paralysis. The molecular interaction of alpha-Bungarotoxin, a long-chain alpha-neurotoxin from Bungarus multicinctus, with human alpha(7)nAChR was studied. Agonists such as acetylcholine and nicotine, which are involved in a diverse array of biological activities, such as enhancement of cognitive performance, were also docked with the theoretical model of human alpha(7)nAChR. These docked complexes were analyzed further to identify the crucial residues involved in the interaction. These results provide the details of the interaction of agonists and competitive antagonists with the three-dimensional model of the N-terminal region of human alpha(7)nAChR and thereby point toward the design of novel lead compounds.
Abstract:
This chapter introduces agent-based models (ABMs), their construction, and the pros and cons of their use. Although relatively new, ABMs have great potential for use in ecotoxicological research, their primary advantage being the realistic simulations that can be constructed, particularly their explicit handling of space and time. Examples are provided of their use in ecotoxicology, primarily exemplified by different implementations of the ALMaSS system. The examples presented demonstrate how multiple stressors, landscape structure, details regarding toxicology, animal behavior, and socioeconomic effects can and should be taken into account when constructing simulations for risk assessment. Like ecological systems, in ABMs the behavior at the system level is not simply the mean of the component responses but the sum of the often nonlinear interactions between components in the system; hence this modeling approach opens the door to implementing and testing much more realistic and holistic ecotoxicological models than are currently used.
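A minimal toy ABM (unrelated to ALMaSS; all parameters assumed) illustrating the closing point: because individuals differ and experience a changing stressor over time, the population-level outcome is not simply the mean of individual dose-responses.

```python
import random

random.seed(42)

class Animal:
    """One agent with its own tolerance threshold for a stressor."""
    def __init__(self):
        self.threshold = random.gauss(1.0, 0.3)  # individual variation
        self.alive = True

    def expose(self, dose):
        if self.alive and dose > self.threshold:
            self.alive = False

population = [Animal() for _ in range(1000)]
for step in range(10):
    dose = 0.1 * step                 # stressor ramps up over time
    for animal in population:
        animal.expose(dose)
    survivors = sum(a.alive for a in population)
    print(f"step {step}: dose {dose:.1f}, survivors {survivors}")
```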
Abstract:
This investigation deals with the question of when a particular population can be considered to be disease-free. The motivation is the case of BSE, where specific birth cohorts may present distinct disease-free subpopulations. The specific objective is to develop a statistical approach suitable for documenting freedom from disease, in particular, freedom from BSE in birth cohorts. The approach is based upon a geometric waiting time distribution for the occurrence of positive surveillance results and formalizes the relationship between design prevalence, cumulative sample size, and statistical power. The simple geometric waiting time model is further modified to account for the diagnostic sensitivity and specificity associated with the detection of disease. This is exemplified for BSE using two different models for the diagnostic sensitivity. The model is furthermore modified in such a way that a set of different values for the design prevalence in the surveillance streams can be accommodated (prevalence heterogeneity), and a general expression for the power function is developed. For illustration, numerical results for BSE suggest that currently (data status September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with probability (power) of 0.8746 or 0.8509, depending on the choice of a model for the diagnostic sensitivity.
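In the simplest case of a single surveillance stream and perfect specificity, the relationship between design prevalence, cumulative sample size, and power reduces to a geometric waiting time for the first positive result. A sketch with assumed example numbers (this omits the paper's prevalence-heterogeneity extension):

```python
def freedom_power(n_sampled, design_prevalence, sensitivity=1.0):
    """Power = probability of at least one positive among n_sampled animals
    when the true prevalence equals the design prevalence (geometric
    waiting time; perfect specificity assumed)."""
    p_detect = design_prevalence * sensitivity  # per-animal detection prob.
    return 1.0 - (1.0 - p_detect) ** n_sampled

# Assumed example: design prevalence 1 in 10,000, test sensitivity 0.9
print(freedom_power(20_000, 1e-4, 0.9))  # ~0.83
```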
Abstract:
Polymetallic nanodimensional assemblies have been prepared via metal-directed assembly of dithiocarbamate-functionalized cavitand structural frameworks with late transition metals (Ni, Pd, Cu, Au, Zn, and Cd). The coordination geometry about the metal centers is shown to dictate the architecture adopted. X-ray crystallographic studies confirm that square planar coordination geometries result in "cagelike" octanuclear complexes, whereas square-based pyramidal metal geometries favor hexanuclear "molecular loop" structures. Both classes of complex are sterically and electronically complementary to the fullerenes (C-60 and C-70). The strong binding of these guests occurred via favorable interactions with the sulfur atoms of multiple dithiocarbamate moieties of the hosts. In the case of the tetrameric copper(II) complexes, the lability of the copper(II)-dithiocarbamate bond enabled the fullerene guests to be encapsulated over time in the electron-rich cavity of the host. The examination of the binding of fullerenes has been undertaken using spectroscopic and electrochemical methods, electrospray mass spectrometry, and molecular modeling.
Abstract:
Optical density measurements were used to estimate the effect of heat treatments on the single-cell lag times of Listeria innocua, which were fitted to a shifted gamma distribution. The single-cell lag time was subdivided into repair time (the shift of the distribution, assumed to be uniform for all cells) and adjustment time (varying randomly from cell to cell). After heat treatments in which all of the cells recovered (sublethal), the repair time and the mean and the variance of the single-cell adjustment time increased with the severity of the treatment. When the heat treatments resulted in a loss of viability (lethal), the repair time of the survivors increased with the decimal reduction of the cell numbers independently of the temperature, while the mean and variance of the single-cell adjustment times remained the same irrespective of the heat treatment. Based on these observations and modeling of the effect of time and temperature of the heat treatment, we propose that the severity of a heat treatment can be characterized by the repair time of the cells whether the heat treatment is lethal or not, an extension of the F value concept to sublethal heat treatments. In addition, the repair time could be interpreted as the extent or degree of injury under a multiple-hit lethality model. Another implication of these results is that the distribution of the time for cells to reach unacceptable numbers in food is not affected by the time-temperature combination resulting in a given decimal reduction.
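The shifted gamma decomposition can be sketched with scipy: the uniform repair time is the distribution's location shift, and the cell-to-cell adjustment time is the gamma-distributed part. Parameter values below are assumed for illustration, not fitted values from the study.

```python
import numpy as np
from scipy import stats

repair_h = 2.0            # repair time: fixed shift ('loc'), hours (assumed)
shape, scale = 3.0, 1.5   # adjustment-time gamma parameters (assumed)

lag_dist = stats.gamma(a=shape, loc=repair_h, scale=scale)
lags = lag_dist.rvs(size=10_000, random_state=1)  # simulated single-cell lags

print(f"mean lag {lags.mean():.2f} h, variance {lags.var():.2f} h^2")

# Recover the three parameters from data, as one would from OD measurements:
a_hat, loc_hat, scale_hat = stats.gamma.fit(lags)
print(f"fitted shape {a_hat:.2f}, repair {loc_hat:.2f} h, scale {scale_hat:.2f}")
```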