935 results for Input-output model


Relevance:

30.00%

Publisher:

Abstract:

To test the appropriateness of using a highly controlled, progressive input (the materials presented to the learner) into which elements are gradually introduced that go slightly beyond the learner's current performance, following Krashen. To analyse the effect of this input on free, communicative written expression, which is the output (what the learner demonstrates having learned), observable and analysable, and which in turn gives a measure of intake (what the learner has assimilated). To analyse the theoretical, linguistic and psycholinguistic implications of this study. 47 students from different parts of Spain, over 25 years of age, following a distance-learning course. 44 third-year BUP students who had received formal classroom instruction for 6 years. The sample contains a total of 1,022 productions, analysed by means of a computerised record sheet. The relationship between the input of each unit of Cher Ami and the learner's output after 4-8 hours of work, based on a set of model readings, is considered fundamental. In response to these readings, students must answer in a written text of 5-10 sentences. A record sheet was designed on which the tutor-teachers of the participating centres transcribe the students' free written productions for each of the 18 units. Each tutor studies the errors, carries out a semantic and pragmatic analysis, and flags the relevant productions. The computer program compares the input used with the input given, and draws up lists of the input used, the input not used, and the extra-input (elements used by the student that do not appear in the materials provided). The research team unifies criteria, cleans the corresponding lists and prepares the data entry for the analysis of results. Test, computer program. Percentages, tables, contrastive study. The results fall into four main blocks. 1. Results by teaching unit, which allow a contrastive analysis against the contents and strategies of each unit of the method used, and lead to the modification of some specific aspects of those materials. 2. The global analysis, which contains the analyses of the most frequent errors and their causes. The semantic analysis reveals the students' fields of interest, with productions strongly centred on themselves and their immediate surroundings. 3. The contrastive study with Bachillerato students yields the surprising result that, in one year and with a programmed methodology, the CAD students obtain far more satisfactory results than Bachillerato students in their sixth year of face-to-face language study. 4. In the sequential individual study, although the results are less conclusive, greater progress is observed in students who keep to the input, whereas those who use extra-input elements not only make more errors, but in some cases end up blocked in their own learning. These last two blocks of results support the authors' theory that learning a cumulative, controlled foreign-language microlanguage is preferable to an interlanguage that runs the risk of fossilising and blocking the learner's progress, provided the context is non-natural and curriculum-based.
These communicative objectives, whose progress must be measurable, provide intrinsic motivation that makes learners participants in their own learning, and this contributes to greater effectiveness in the language teaching-learning process. The contents and methods of formal, face-to-face teaching should be revised.

Relevance:

30.00%

Publisher:

Abstract:

We've developed a new ambient occlusion technique based on an information-theoretic framework. Essentially, our method computes a weighted visibility from each object polygon to all viewpoints; we then use these visibility values to obtain the information associated with each polygon. So, just as a viewpoint has information about the model's polygons, the polygons gather information on the viewpoints. We therefore have two measures associated with an information channel defined by the set of viewpoints as input and the object's polygons as output, or vice versa. From this polygonal information, we obtain an occlusion map that serves the same purpose as a classic ambient occlusion map. Our approach also offers additional applications, including an importance-based viewpoint-selection guide and a means of enhancing object features and producing nonphotorealistic object visualizations.
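To make the channel construction concrete, here is a minimal Python sketch of the viewpoint-polygon information channel described above. It assumes a precomputed visibility matrix V (rows: viewpoints, columns: polygons, entries: projected-area visibility weights); the function name and normalisation details are illustrative, not taken from the paper.

```python
import numpy as np

def polygon_information(V, view_prob=None):
    """Per-polygon information from a viewpoint -> polygon channel (sketch).

    V : (n_views, n_polys) array of visibility weights, e.g. the projected
        area of each polygon as seen from each viewpoint.
    view_prob : optional prior p(v) over viewpoints (uniform by default).
    Returns each polygon's share of the channel's mutual information,
    which can be normalised into an ambient-occlusion-style map.
    """
    n_views, n_polys = V.shape
    pv = np.full(n_views, 1.0 / n_views) if view_prob is None else view_prob
    # Conditional p(j|v): normalise visibility over polygons for each view.
    p_j_given_v = V / V.sum(axis=1, keepdims=True)
    # Marginal p(j) = sum_v p(v) p(j|v).
    p_j = pv @ p_j_given_v
    # Per-polygon information: sum over viewpoints of
    # p(v) p(j|v) log2( p(j|v) / p(j) ).
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_j_given_v > 0, p_j_given_v / p_j, 1.0)
        info = (pv[:, None] * p_j_given_v * np.log2(ratio)).sum(axis=0)
    return info

# Normalising `info` to [0, 1] gives an occlusion-like map: polygons seen
# from few viewpoints carry more "surprise" and map to darker values.
```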

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, the increase in the levels of solar ultraviolet radiation (UVR) reaching the Earth (mainly due to the decrease in stratospheric ozone), together with the detected rise in diseases related to UVR exposure, has led to a large body of research on solar radiation in this band and its effects on humans. The ultraviolet index (UVI), which has been adopted internationally, was defined with the aim of informing the general public about the risks of exposing the bare body to UVR and of conveying preventive messages. The UVI was initially defined as the daily maximum value. However, its current use has broadened, and it is meaningful to refer to an instantaneous value or to the daily evolution of the measured, modelled or forecast UVI. The specific UVI value is affected by the Sun-Earth geometry, clouds, ozone, aerosols, altitude and surface albedo. High-quality UVI measurements are essential as a reference and for studying long-term trends; accurate modelling techniques are also needed to understand the factors affecting UVR, to forecast the UVI, and as quality control for the measurements. The most accurate UVI measurements are expected to come from spectroradiometers. However, since these devices are costly, UVI data from erythemal radiometers are more common (indeed, most UVI networks are equipped with this type of sensor). The best modelling results are obtained with multiple-scattering radiative transfer models when the input information is well known. However, input information such as the optical properties of aerosols is often unknown, which can lead to large modelling uncertainties. Simpler models are often used for applications such as UVI forecasting or UVI mapping, since they are faster and require fewer input parameters. Within this framework, the general aim of this study is to analyse the agreement that can be reached between measured and modelled UVI under cloud-free conditions. Accordingly, this study presents model-measurement comparisons for different modelling techniques, different input options, and UVI measurements from both erythemal radiometers and spectroradiometers. As a general conclusion, model-measurement comparison proves very useful for detecting limitations and estimating uncertainties in both the models and the measurements. Regarding modelling, the main limitation found is the lack of knowledge of the aerosol information used as model input. Significant differences were also found between ozone measured from satellites and from the ground, which can lead to significant differences in the modelled UVI. PTUV, a new and simple parameterisation for the fast calculation of UVI under cloud-free conditions, has been developed on the basis of radiative transfer calculations. The parameterisation performs well both with respect to the base model and in comparison with several UVI measurements. PTUV has proved useful for particular applications such as studying the annual evolution of the UVI at a given site (Girona) and composing high-resolution maps of typical UVI values for a specific territory (Catalonia).
Regarding the measurements, knowing the spectral response of erythemal radiometers proves very important in order to avoid large uncertainties in the measured UVI. These instruments, when well characterised, compare well with high-quality spectroradiometers in measuring UVI. The most important measurement issues are calibration and long-term stability. A temperature effect was also observed in PTFE, a material used in the diffusers of some instruments, which could potentially have important implications in the experimental field. Finally, as regards the model-measurement comparisons, the best agreement is found when UVI measurements from high-quality spectroradiometers are considered and radiative transfer models are used with the best available data on ozone and aerosol optical parameters and their changes over time. In this case the agreement can be as close as 0.1% in UVI, and is typically within 3%. This agreement deteriorates markedly if aerosol information is ignored, and it depends strongly on the aerosol single-scattering albedo. Other model inputs, such as surface albedo and the ozone and temperature profiles, introduce smaller uncertainties in the modelling results.
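The abstract does not give PTUV's functional form, so the sketch below shows only a generic clear-sky UVI parameterisation of the same family, driven by solar zenith angle and total ozone column; the coefficients are illustrative placeholders (not the published PTUV values), and aerosol, altitude and albedo corrections are omitted.

```python
import numpy as np

def uvi_clear_sky(sza_deg, ozone_du, a=12.5, b=2.42, c=1.23):
    """Generic fast clear-sky UVI parameterisation (sketch).

    Of the form UVI ~ a * cos(sza)**b * (300 / TOC)**c, the *kind* of
    simple model the abstract describes; a, b, c are placeholders.
    """
    mu0 = np.clip(np.cos(np.radians(sza_deg)), 0.0, 1.0)  # 0 below horizon
    return a * mu0**b * (300.0 / ozone_du)**c

# Example: near-midday summer sun at mid-latitudes (sza ~ 20 deg, 320 DU):
print(round(uvi_clear_sky(20.0, 320.0), 1))  # roughly UVI 10
```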

Relevance:

30.00%

Publisher:

Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
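Because G-Rex exposes runs through a REST interface, a basic HTTP client is enough to drive a job. The sketch below illustrates that interaction pattern in Python; the endpoint paths and JSON fields are hypothetical stand-ins, not the documented G-Rex API.

```python
import time
import requests

BASE = "http://cluster.example.org/grex"   # hypothetical G-Rex server URL

def run_remote_model(input_path):
    # Upload an input file and start a run (endpoint names are illustrative).
    with open(input_path, "rb") as f:
        resp = requests.post(f"{BASE}/jobs", files={"input": f})
    resp.raise_for_status()
    job_url = resp.json()["jobUrl"]        # hypothetical response field

    # Poll the job, streaming output back while the run is in progress,
    # mirroring G-Rex's incremental transfer of output to the client.
    while True:
        status = requests.get(job_url).json()
        for name in status.get("newOutputFiles", []):
            data = requests.get(f"{job_url}/outputs/{name}").content
            with open(name, "wb") as out:   # keep a local copy as it arrives
                out.write(data)
        if status["state"] in ("FINISHED", "FAILED"):
            return status["state"]
        time.sleep(10)                      # avoid hammering the server
```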

Relevance:

30.00%

Publisher:

Abstract:

Results are presented from a new web application called OceanDIVA - Ocean Data Intercomparison and Visualization Application. This tool reads hydrographic profiles and ocean model output and presents the data on either depth levels or isotherms for viewing in Google Earth, or as probability density functions (PDFs) of regional model-data misfits. As part of the CLIVAR Global Synthesis and Observations Panel, an intercomparison of water mass properties of various ocean syntheses has been undertaken using OceanDIVA. Analysis of model-data misfits reveals significant differences between the water mass properties of the syntheses, such as the ability to capture mode water properties.
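As a rough illustration of the model-data misfit PDFs that OceanDIVA presents, the sketch below bins model-minus-observation differences on a common depth level; the variable names and histogram settings are assumptions, not OceanDIVA's implementation.

```python
import numpy as np

def misfit_pdf(model_temp, obs_temp, bins=50, range_=(-5.0, 5.0)):
    """PDF of model-minus-observation misfits on a common depth level.

    model_temp : model temperature sampled at the observation positions
                 (1-D array, one value per profile).
    obs_temp   : observed temperatures at the same positions.
    Returns bin centres and a density-normalised histogram.
    """
    misfit = np.asarray(model_temp) - np.asarray(obs_temp)
    density, edges = np.histogram(misfit, bins=bins, range=range_,
                                  density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, density

# A sharply peaked PDF near zero indicates good agreement; a shifted or
# broad PDF flags systematic bias or poorly captured water-mass properties.
```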

Relevance:

30.00%

Publisher:

Abstract:

Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Model prediction uncertainty quantification is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England-Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as land-use-type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets which simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are not sufficient data to support their many parameters.
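A minimal sketch of the GLUE procedure used here: Monte Carlo sampling of parameters, a Nash-Sutcliffe likelihood measure, and the behavioural threshold of 0.3 quoted in the abstract. The run_model callable and parameter ranges are placeholders standing in for INCA-P.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(run_model, param_ranges, obs, n_samples=10000, threshold=0.3,
         rng=None):
    """Generalised likelihood uncertainty estimation (sketch).

    run_model    : callable mapping a parameter vector to a simulated
                   series (stands in for an INCA-P run).
    param_ranges : list of (low, high) uniform sampling bounds.
    Returns behavioural parameter sets and their NSE-based likelihoods.
    """
    rng = np.random.default_rng(rng)
    lows, highs = np.array(param_ranges).T
    behavioural, likelihoods = [], []
    for _ in range(n_samples):
        theta = rng.uniform(lows, highs)
        score = nash_sutcliffe(obs, run_model(theta))
        if score >= threshold:          # behavioural threshold from the study
            behavioural.append(theta)
            likelihoods.append(score)
    return np.array(behavioural), np.array(likelihoods)
```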

Relevance:

30.00%

Publisher:

Abstract:

The Atlantic meridional overturning circulation (AMOC) is an important component of the climate system. Models indicate that the AMOC can be perturbed by freshwater forcing in the North Atlantic. Using an ocean-atmosphere general circulation model, we investigate the dependence of such a perturbation of the AMOC, and the consequent climate change, on the region of freshwater forcing. A wide range of changes in AMOC strength is found after 100 years of freshwater forcing. The largest changes in AMOC strength occur when the regions of deepwater formation in the model are forced directly, although reductions in deepwater formation in one area may be compensated by enhanced formation elsewhere. North Atlantic average surface air temperatures correlate linearly with the AMOC decline, but warming may occur in localised regions, notably over Greenland and where deepwater formation is enhanced. This brings into question the representativeness of temperature changes inferred from Greenland ice-core records.

Relevance:

30.00%

Publisher:

Abstract:

This paper examines to what extent crops and their environment should be viewed as a coupled system. Crop impact assessments currently use climate model output offline to drive process-based crop models. However, in regions where local climate is sensitive to land surface conditions more consistent assessments may be produced with the crop model embedded within the land surface scheme of the climate model. Using a recently developed coupled crop–climate model, the sensitivity of local climate, in particular climate variability, to climatically forced variations in crop growth throughout the tropics is examined by comparing climates simulated with dynamic and prescribed seasonal growth of croplands. Interannual variations in land surface properties associated with variations in crop growth and development were found to have significant impacts on near-surface fluxes and climate; for example, growing season temperature variability was increased by up to 40% by the inclusion of dynamic crops. The impact was greatest in dry years where the response of crop growth to soil moisture deficits enhanced the associated warming via a reduction in evaporation. Parts of the Sahel, India, Brazil, and southern Africa were identified where local climate variability is sensitive to variations in crop growth, and where crop yield is sensitive to variations in surface temperature. Therefore, offline seasonal forecasting methodologies in these regions may underestimate crop yield variability. The inclusion of dynamic crops also altered the mean climate of the humid tropics, highlighting the importance of including dynamical vegetation within climate models.

Relevance:

30.00%

Publisher:

Abstract:

We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model. This is applied to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes over time in the performance of the instrument to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5–10 W m−2 in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data the differences between the observed and modeled OLR and albedo are well within these error estimates. The close agreement between the observed and modeled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data, prior to their release.
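The consistency check described above reduces, in essence, to asking whether observed-minus-modelled clear-sky fluxes fall within the estimated model-error envelope. A toy version in Python, taking the ±5–10 W m−2 OLR envelope from the abstract as an assumed bound:

```python
import numpy as np

def clear_sky_check(olr_obs, olr_model, envelope=10.0):
    """Mean and RMS of observed-minus-modelled clear-sky OLR (W m^-2).

    Returns the bias, the RMS difference, and whether the bias lies within
    the assumed model-error envelope (the abstract quotes +/-5-10 W m^-2).
    """
    diff = np.asarray(olr_obs) - np.asarray(olr_model)
    bias = diff.mean()
    rms = np.sqrt((diff ** 2).mean())
    return bias, rms, abs(bias) <= envelope
```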

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes), which is a vertical (1-D) integrated radiative transfer and energy balance model. The model links visible to thermal infrared radiance spectra (0.4 to 50 μm) as observed above the canopy to the fluxes of water, heat and carbon dioxide, as a function of vegetation structure and the vertical profiles of temperature. Output of the model is the spectrum of outgoing radiation in the viewing direction and the turbulent heat fluxes, photosynthesis and chlorophyll fluorescence. A special routine is dedicated to the calculation of photosynthesis rate and chlorophyll fluorescence at the leaf level as a function of net radiation and leaf temperature. The fluorescence contributions from individual leaves are integrated over the canopy layer to calculate top-of-canopy fluorescence. The calculation of radiative transfer and the energy balance is fully integrated, allowing for feedback between leaf temperatures, leaf chlorophyll fluorescence and radiative fluxes. Leaf temperatures are calculated on the basis of energy balance closure. Model simulations were evaluated against observations reported in the literature and against data collected during field campaigns. These evaluations showed that SCOPE is able to reproduce realistic radiance spectra, directional radiance and energy balance fluxes. The model may be applied for the design of algorithms for the retrieval of evapotranspiration from optical and thermal earth observation data, for validation of existing methods to monitor vegetation functioning, to help interpret canopy fluorescence measurements, and to study the relationships between synoptic observations and diurnally integrated quantities. The model has been implemented in Matlab and has a modular design, thus allowing for great flexibility and scalability.
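The abstract notes that leaf temperatures follow from energy balance closure. A minimal sketch of that idea: iterate on leaf temperature until absorbed net radiation balances sensible and latent heat loss. The linearised flux forms and constants are generic textbook assumptions, not SCOPE's actual routines.

```python
def leaf_temperature(rn_abs, t_air, g_h, g_v, vpd_func,
                     max_iter=50, tol=1e-4):
    """Solve leaf energy balance closure for leaf temperature (sketch).

    rn_abs   : absorbed net radiation (W m^-2)
    t_air    : air temperature (deg C)
    g_h, g_v : heat / vapour conductances (mol m^-2 s^-1), assumed constant
    vpd_func : function giving the leaf-to-air vapour pressure deficit (kPa)
               at a given leaf temperature.
    """
    cp = 29.3        # molar heat capacity of air, J mol^-1 K^-1
    lam = 44000.0    # molar latent heat of vaporisation, J mol^-1
    p_atm = 101.3    # atmospheric pressure, kPa
    t_leaf = t_air   # initial guess
    for _ in range(max_iter):
        h = cp * g_h * (t_leaf - t_air)             # sensible heat flux
        le = lam * g_v * vpd_func(t_leaf) / p_atm   # latent heat flux
        resid = rn_abs - h - le                     # energy balance residual
        if abs(resid) < tol:
            break
        # Damped fixed-point update: warm the leaf if energy is left over.
        t_leaf += resid / (cp * g_h + 100.0)
    return t_leaf
```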

Relevance:

30.00%

Publisher:

Abstract:

A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, the collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process.

Relevance:

30.00%

Publisher:

Abstract:

There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent to this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow, and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first, then calibrating the remaining parameters (as opposed to calibrating all parameters together), was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated. The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. The results demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so.
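A compact sketch of the regionalised sensitivity analysis applied to INCA: sample parameter sets, split runs into behavioural and non-behavioural by a fit threshold, and rank each parameter by the Kolmogorov-Smirnov distance between the two resulting parameter distributions. The model callable, fit measure and bounds are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

def regionalised_sensitivity(run_model, bounds, obs, fit_fn, threshold,
                             n_samples=5000, rng=None):
    """Hornberger-Spear style regionalised sensitivity analysis (sketch).

    run_model : maps a parameter vector to a simulated series (INCA stand-in).
    bounds    : (n_params, 2) array of uniform sampling bounds.
    fit_fn    : fit measure such as Nash-Sutcliffe (higher is better).
    Returns per-parameter KS statistics: large values mean the parameter
    strongly separates behavioural from non-behavioural runs.
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds)
    samples = rng.uniform(bounds[:, 0], bounds[:, 1],
                          size=(n_samples, len(bounds)))
    fits = np.array([fit_fn(obs, run_model(theta)) for theta in samples])
    behavioural = fits >= threshold   # assumes both groups are non-empty
    ks = [ks_2samp(samples[behavioural, i],
                   samples[~behavioural, i]).statistic
          for i in range(len(bounds))]
    return np.array(ks)
```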

Relevance:

30.00%

Publisher:

Abstract:

Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent with a non-spatial validation.
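The full REML linear mixed model is beyond a short sketch, but the idea of scale-dependent validation can be illustrated by splitting observations and predictions into coarse-scale block means and fine-scale within-block residuals, then comparing variances and correlations at each scale. The block decomposition below is a simplified stand-in, not the method of the paper.

```python
import numpy as np

def scale_split_validation(obs, pred, coords, block=50.0):
    """Compare predictions with observations at two spatial scales (sketch).

    Block means capture coarse-scale variation; within-block residuals
    capture fine-scale variation. coords is an (n, 2) array of positions
    in the same units as `block` (e.g. metres).
    """
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    # Assign each point to a square block of side `block`.
    keys = np.floor(np.asarray(coords) / block).astype(int)
    _, idx = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(idx)
    obs_coarse = np.bincount(idx, weights=obs) / counts
    pred_coarse = np.bincount(idx, weights=pred) / counts
    obs_fine = obs - obs_coarse[idx]      # within-block residuals
    pred_fine = pred - pred_coarse[idx]
    report = {}
    for name, o, p in [("coarse", obs_coarse, pred_coarse),
                       ("fine", obs_fine, pred_fine)]:
        report[name] = {"var_obs": o.var(), "var_pred": p.var(),
                        "corr": np.corrcoef(o, p)[0, 1]}
    return report
```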

Relevance:

30.00%

Publisher:

Abstract:

Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values of soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981 to August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R² values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R² values for the testing period were between 0.87 and 0.94 at a depth of 20 cm and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers, where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment-scale models.
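A sketch consistent with the model description above: a daily explicit update of soil temperature at depth, with an apparent heat capacity term active near freezing and exponential damping of the air-temperature forcing under snow. Parameter values are illustrative, and the exact published equations are not reproduced here.

```python
import numpy as np

def soil_temperature_step(t_soil, t_air, snow_depth, z,
                          k_t=0.8, c_a=1.0e6, c_ice=4.0e6, f_s=-3.0):
    """One daily update of soil temperature at depth z (sketch).

    Mirrors the abstract's four parameters with illustrative values:
    k_t   : average soil thermal conductivity (W m^-1 K^-1)
    c_a   : specific heat capacity of soil (J m^-3 K^-1)
    c_ice : additional apparent heat capacity due to freezing and thawing,
            active only near 0 deg C
    f_s   : empirical snow damping parameter (m^-1, negative)
    """
    dt = 86400.0                                   # one day in seconds
    # Latent heat exchange around freezing acts as extra heat capacity.
    c_app = c_a + (c_ice if -5.0 < t_soil < 0.0 else 0.0)
    # Heat conduction drives the soil towards the air temperature.
    delta = dt * k_t / (c_app * (2.0 * z) ** 2) * (t_air - t_soil)
    if snow_depth > 0:
        delta *= np.exp(f_s * snow_depth)          # snow insulates the soil
    return t_soil + delta
```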

Relevance:

30.00%

Publisher:

Abstract:

Technology involving genetic modification of crops has the potential to contribute to rural poverty reduction in many developing countries. Thus far, pesticide-producing Bacillus thuringiensis (Bt) varieties of cotton have been the main GM crops under cultivation in developing nations. Several studies have evaluated the farm-level performance of Bt varieties in comparison to conventional ones by estimating production technology, and have mostly found Bt technology to be very successful in raising output and/or reducing pesticide input. However, the production risk properties of this technology have not been studied, although they are likely to be important to risk-averse smallholders. This study investigates the output risk aspects of Bt technology by estimating two 'flexible risk' production function models, which allow technology to independently affect the mean and higher moments of output. The first is the popular Just-Pope model and the second is a more general 'damage control' flexible risk model. The models are applied to cross-sectional data on South African smallholders, some of whom used Bt varieties. The results show no evidence that a 'risk-reduction' claim can be made for Bt technology. Indeed, there is some evidence to support the notion that the technology increases output risk, implying that the simple (expected) profit computations used in past evaluations may overstate its true benefits.
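For reference, the Just-Pope specification mentioned above separates the effect of inputs on mean output from their effect on output variance; in a standard textbook form (not necessarily the exact specification estimated in the paper):

```latex
y_i = f(\mathbf{x}_i;\,\boldsymbol{\beta})
      + h(\mathbf{x}_i;\,\boldsymbol{\alpha})^{1/2}\,\varepsilon_i,
\qquad \mathbb{E}[\varepsilon_i] = 0,\quad \operatorname{Var}(\varepsilon_i) = 1,
```

so that \(\mathbb{E}[y_i \mid \mathbf{x}_i] = f(\mathbf{x}_i;\boldsymbol{\beta})\) and \(\operatorname{Var}(y_i \mid \mathbf{x}_i) = h(\mathbf{x}_i;\boldsymbol{\alpha})\). An input such as a Bt-variety dummy is then risk-reducing only if its marginal effect on the variance function \(h\) is negative; the abstract reports no evidence of this, and some evidence of the opposite.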