947 results for MEAN-FIELD MODELS


Relevance: 30.00%

Abstract:

During the second phase of the Arabian Sea Monsoon Experiment (ARMEX-II), extensive measurements of spectral aerosol optical depth, mass concentration, and mass size distribution of ambient aerosols, as well as mass concentration of aerosol black carbon (BC), were made onboard a research vessel during the intermonsoon period (i.e., when the monsoon winds are in transition from northeasterlies to westerlies/southwesterlies) over the Arabian Sea (AS) adjoining the Indian Peninsula. Simultaneous measurements of spectral aerosol optical depths (AODs) were made at different regions over the adjoining Indian landmass. Mean AODs (at 500-nm wavelength) over the ocean (~0.44) were comparable to those over the coastal land (~0.47), but were lower than the values observed over the plateau regions of the central Indian Peninsula (~0.61). The aerosol properties were found to respond distinctly to changes in the trajectories, with higher optical depths and flatter AOD spectra associated with trajectories indicating advection from west Asia and from northwest and west-coastal India. On average, BC constituted only ~2.2% of the total aerosol mass, compared to the climatological values of ~6% over the coastal land during the same season. These data are used to characterize the physical properties of aerosols and to assess the resulting short-wave direct aerosol forcing. The mean values were approximately -27 W m^-2 at the surface and -12 W m^-2 at the top of the atmosphere (TOA), resulting in a net atmospheric forcing of +15 W m^-2. The forcing also depended on the region from which the advection predominated. The surface and atmospheric forcing were in the ranges -40 to -57 W m^-2 and +27 to +39 W m^-2, respectively, for advection from west Asia and western coastal India, whereas they were as low as -19 and +10 W m^-2, respectively, when the advection was mainly from the Bay of Bengal and from central/peninsular India. In all these cases, the net atmospheric forcing (heating) efficiency was lower than the values reported for the northern Indian Ocean during northern winter, which is attributed to the reduced BC mass fraction.
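As a quick consistency check on the forcing values quoted above, the net atmospheric forcing is simply the difference between the TOA and surface terms; in LaTeX form:

\Delta F_{\mathrm{atm}} = \Delta F_{\mathrm{TOA}} - \Delta F_{\mathrm{surface}} = (-12) - (-27)\ \mathrm{W\,m^{-2}} = +15\ \mathrm{W\,m^{-2}}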

Relevance: 30.00%

Abstract:

The study analyses European social policy as a political project that proceeds under the guidance of the European Commission. In the name of modernisation, the project aims to build a new idea of the welfare state. To understand the project, it is necessary to distance oneself both from the juridical competence of the European Union and from the traditional national welfare state models. The question is about sharing problems, as well as solutions to them: it is the creation and sharing of common views, concepts and images that play a key role in European integration. Drawing on texts and speeches produced by the European Commission, the study throws light on the development of European social policy during the first years of the 2000s. The study "freeze-frames", in the name of Europe as an entity, the welfare debate that has its starting points in the nation states.

The first article approaches the European social model as a story in itself: a preparatory, persuasive narrative that concerns the management of change. The article shows how an audience can be motivated to work towards a set target by using discursive elements in a persuasive manner: the function of a persuasive story is to convince the target audience of the appropriateness of the chosen direction and to shape their identity so that they are favourably disposed to the desired political targets. This is a kind of "intermediate state" in which the story, despite its inner contradictions and inaccuracies, succeeds in appearing as an almost self-evident path towards the modern social policy that Europe is currently seen to be in need of.

The second article outlines the European social model as a question of governance. Health as a sector of social policy is detached from the old political order, which was based on the welfare state, and is closely linked to the economy; at the same time the population is seen primarily as an economic resource. The Commission is working towards a "Europe of Health" that grapples with the problem of governance with the help of the "healthisation" of society, healthy citizenship and health economics. The way the Commission speaks is guided by the Union's powerful interest in acting as "Europe" in the field of welfare policy. At the same time, the traditional separateness of health policy is effaced in order to make health policy reforms part of the Union's wider modernisation targets.

The third article then presents European social policy as its own area of governance. Using an approach based on critical discourse analysis, it examines the classification systems and presentation styles adopted by Commission communications, as well as the identities that they help build. In analysing the "new start" of the Lisbon strategy from the perspective of social policy, the article shows how the emphasis has shifted from the persuasive arguments for change, with necessary common European targets in the early stages of the strategy, towards the implementation of reforms: from a narrative to a vision and from a diagnosis to healing. The phase of global competition represents "the modern" with which European society, with its culture and ways of life, now has to be matched. The Lisbon strategy is a way to direct this societal change, thus building a modern European social policy.

The fourth article describes how the Commission uses its communications policy to build practices and techniques of governance and how it persuades citizens to participate in the creation of a European project of change. This also requires a new kind of agency: agents for whom accountability and responsibility mean integration into and commitment to European society. Accountability is shaped into a decisive factor in implementing the European Union's strategy of change; as such, it displaces hierarchical confrontations and emphasises common action with a view to modernising Europe. However, the Union's discourse cannot be described as a political language that would genuinely rouse and convince the audience at the level of everyday life.

Keywords: European social policy, EU policy, European social model, European Commission, modernisation of welfare, welfare state, communications, discursiveness.

Relevance: 30.00%

Abstract:

This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
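For orientation, the chordal SLE(kappa) referred to above is the random Loewner chain driven by a scaled Brownian motion; this standard definition (general background, not specific to the thesis) reads, in LaTeX form:

\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t,

where B_t is a standard one-dimensional Brownian motion and the random curve is traced out by the points at which the conformal maps g_t cease to be well defined.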

Relevance: 30.00%

Abstract:

The magnetic field of the Earth is 99% of internal origin and is generated in the liquid outer core by the dynamo principle. In the 19th century, Carl Friedrich Gauss proved that the field can be described by a sum of spherical harmonic terms. Presently, this theory is the basis of, e.g., the IGRF (International Geomagnetic Reference Field) models, which are the most accurate description available for the geomagnetic field. On average, the dipole forms 3/4 and the non-dipolar terms 1/4 of the instantaneous field, but the temporal mean of the field is assumed to be a pure geocentric axial dipole field. The validity of this GAD (Geocentric Axial Dipole) hypothesis has been estimated using several methods. In this work, the testing rests on the frequency distribution of inclination with respect to latitude. Each combination of dipole (GAD), quadrupole (G2) and octupole (G3) produces a distinct inclination distribution. These theoretical distributions have been compared with those calculated from empirical observations from different continents and, finally, from the entire globe. Only data from Precambrian rocks (over 542 million years old) have been used in this work. The basic assumption is that during the long-term course of drifting continents, the globe is sampled adequately. There were 2823 observations altogether in the paleomagnetic database of the University of Helsinki. The effects of the quality of observations, as well as of the age and rock type, have been tested. For the comparison between theoretical and empirical distributions, chi-square testing has been applied. In addition, spatiotemporal binning has been used effectively to remove the errors caused by multiple observations. The modelling from igneous rock data shows that the average magnetic field of the Earth is best described by a combination of a geocentric dipole and a very weak octupole (less than 10% of GAD). Filtering and binning gave the distributions a more GAD-like appearance, but the deviation from GAD increased as a function of the age of the rocks. The distribution calculated from so-called key poles, the most reliable determinations, behaves almost like GAD, having a zero quadrupole and an octupole of 1% of GAD. In no earlier study have rocks older than 400 Ma given a result so close to GAD, although low inclinations have been prominent especially in the sedimentary data. Despite these results, a greater amount of high-quality data and a proof of the long-term randomness of the Earth's continental motions are needed to make sure the dipole model holds true.
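For reference, the inclination test mentioned above builds on the dipole formula: for a pure geocentric axial dipole the inclination I at latitude lambda satisfies, in LaTeX form,

\tan I = 2 \tan \lambda ,

and adding zonal quadrupole (G2) and octupole (G3) contributions deforms this relation, which is why each (GAD, G2, G3) combination predicts its own distinctive frequency distribution of inclinations.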

Relevance: 30.00%

Abstract:

In this paper, an attempt is made to study the influence of external light waves on the thermoelectric power under a strong magnetic field (TPSM) in ultrathin films (UFs), quantum wires (QWs) and quantum dots (QDs) of optoelectronic materials, in which the unperturbed dispersion relation of the conduction electrons is defined by the three- and two-band models of Kane together with parabolic energy bands, on the basis of newly formulated electron dispersion laws in each case. We have plotted the TPSM as a function of film thickness, electron concentration, light intensity and wavelength for UFs, QWs and QDs of InSb, GaAs, Hg1-xCdxTe and In1-xGaxAsyP1-y, respectively. It appears from the figures that for UFs the TPSM increases with increasing thickness in quantum steps, decreases with increasing electron degeneracy while exhibiting entirely different types of oscillations, and changes with both light intensity and wavelength; these two latter types of plots are the direct signature of light waves on the opto-TPSM. For QWs, the opto-TPSM exhibits rectangular oscillations with increasing thickness and shows enhanced spiky oscillations with electron concentration per unit length. For QDs, the opto-TPSM increases with increasing film thickness, exhibiting trapezoidal variations which occur during quantum jumps; the length and breadth of the trapezoids depend entirely on the energy band constants. Under the condition of non-degeneracy, the opto-TPSM simplifies to the well-known classical TPSM equation, which is a function of three constants only and is independent of the signature of the band structure.

Relevance: 30.00%

Abstract:

Non-Gaussianity of signals/noise often results in significant performance degradation for systems designed under the Gaussian assumption, so non-Gaussian signals/noise require a different modelling and processing approach. In this paper, we discuss a new Bayesian estimation technique for non-Gaussian signals corrupted by colored non-Gaussian noise. The method is based on using zero-mean finite Gaussian mixture models (GMMs) for both signal and noise. The estimation is done using an adaptive non-causal nonlinear filtering technique. The method involves deriving an estimator in terms of the GMM parameters, which are in turn estimated using the EM algorithm. The proposed filter is of finite length and offers computational feasibility. The simulations show that the proposed method gives a significant improvement over the linear filter for a wide variety of noise conditions, including impulsive noise. We also claim that estimating the signal using its correlation with both past and future samples leads to a reduced mean squared error compared with signal estimation based on past samples only.
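To make the flavour of such an estimator concrete, here is a minimal scalar sketch (a white-noise, single-sample simplification, not the paper's non-causal adaptive filter): the MMSE estimate of a zero-mean GMM signal observed in independent zero-mean GMM noise has a closed form once the mixture parameters are given (they are assumed known here; the paper estimates them with EM). All numerical values below are made-up examples.

import numpy as np
from scipy.stats import norm

def gmm_mmse_estimate(y, sig_w, sig_var, noise_w, noise_var):
    """MMSE estimate E[s | y] for y = s + v, where s and v are independent
    zero-mean scalar Gaussian mixtures (weights w, variances var)."""
    # Posterior probability of each (signal, noise) component pair given y
    post = np.array([[ws * wv * norm.pdf(y, 0.0, np.sqrt(vs + vv))
                      for wv, vv in zip(noise_w, noise_var)]
                     for ws, vs in zip(sig_w, sig_var)])
    post /= post.sum()
    # Per-pair Wiener gain, averaged under the posterior responsibilities
    gains = np.array([[vs / (vs + vv) for vv in noise_var] for vs in sig_var])
    return float((post * gains).sum() * y)

# Example with made-up mixture parameters (heavy-tailed, impulsive noise)
sig_w, sig_var = [0.7, 0.3], [0.5, 4.0]      # signal GMM
noise_w, noise_var = [0.9, 0.1], [0.1, 9.0]  # noise GMM
print(gmm_mmse_estimate(1.8, sig_w, sig_var, noise_w, noise_var))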

Relevance: 30.00%

Abstract:

The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements, as well as in the postulated model for the structural behaviour, are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution to the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
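For the linear-Gaussian case, the dynamic state estimation referred to above reduces to the discrete Kalman filter; the sketch below shows one generic predict/update cycle (textbook form, for orientation only; it is not the authors' specific combination with level-crossing statistics):

import numpy as np

def kalman_step(x, P, y, A, Q, H, R):
    """One predict/update cycle of the discrete Kalman filter.
    x, P : current state estimate and its covariance
    y    : new (possibly sparse) measurement vector
    A, Q : state transition matrix and process noise covariance
    H, R : measurement matrix and measurement noise covariance"""
    # Predict the state one step forward
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new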

Relevance: 30.00%

Abstract:

Sea level rise is among the most worrying consequences of climate change, and the biggest uncertainty in sea level predictions lies in the future behaviour of the ice sheets of Greenland and Antarctica. In this work, a literature review is made concerning the future of the Greenland ice sheet and the effect of its melting on Baltic Sea level. The relation between sea level and ice sheets is also considered more generally, from a theoretical and historical point of view. Lately, surprisingly rapid changes in the amount of ice discharging into the sea have been observed along the coastal areas of the ice sheets, and the mass deficit of the Greenland and West Antarctic ice sheets, which are considered vulnerable to warming, has been increasing since the 1990s. The changes are probably related to atmospheric or oceanic temperature variations, which affect the flow speed of ice either via meltwater penetrating to the bottom of the ice sheet or via changes in the flow resistance generated by the floating parts of an ice stream. These phenomena are assumed to increase the mass deficit of the ice sheets in the warming climate; however, there is no comprehensive theory to explain and model them. Thus, it is not yet possible to make reliable predictions of the ice sheet contribution to sea level rise. On the grounds of the historical evidence it appears that sea level can rise rather rapidly, 1-2 metres per century, even during warm climate periods. Sea level rise projections of similar magnitude have been made with so-called semi-empirical methods that are based on modelling the link between sea level and global mean temperature. Such a rapid rise would require considerable acceleration of the ice sheet flow. A stronger rise appears rather unlikely, among other things because the mountainous coastline restricts ice discharge from Greenland. The upper limit of sea level rise from Greenland alone has been estimated at half a metre by the end of this century. Due to changes in the Earth's gravity field, the sea level rise caused by melting ice is not spatially uniform. Near the melting ice sheet the sea level rise is considerably smaller than the global average, whereas farther away it is slightly greater than the average. Because of this phenomenon, the effect of the Greenland ice sheet on Baltic Sea level will probably be rather small during this century, 15 cm at most. Melting of the Antarctic ice sheet is clearly more dangerous for the Baltic Sea, but also very uncertain. It is likely that sea level predictions will become more accurate in the near future as the ice sheet models develop.
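For context, the semi-empirical approach mentioned above typically posits a direct relation between the rate of sea level rise and the warming above a reference temperature; a commonly used form from the literature (quoted as general background, not derived in this review) is, in LaTeX:

\frac{dH}{dt} = a\,(T - T_0),

where H is global mean sea level, T the global mean temperature, T_0 a baseline temperature at which sea level is in equilibrium, and a an empirically fitted sensitivity.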

Relevance: 30.00%

Abstract:

Aims. Following an earlier proposal for the origin of twist in the magnetic fields of solar active regions, we model the penetration of a wrapped-up background poloidal field into a toroidal magnetic flux tube rising through the solar convective zone.
Methods. The rise of the straight, cylindrical flux tube is followed by numerically solving the induction equation in a comoving Lagrangian frame, while an external poloidal magnetic field is assumed to be radially advected onto the tube with a speed corresponding to the rise velocity.
Results. One prediction of our model is the existence of a ring of reverse current helicity on the periphery of active regions. On the other hand, the amplitude of the resulting twist depends sensitively on the assumed structure (diffuse vs. concentrated/intermittent) of the active region magnetic field right before its emergence, and on the assumed vertical profile of the poloidal field. Nevertheless, the model with the most plausible choice of assumptions yields a mean twist comparable to the observations.
Conclusions. Our results indicate that the contribution of this mechanism to the twist can be quite significant, and under favourable circumstances it can potentially account for most of the current helicity observed in active regions.
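For reference, the induction equation solved in such models is the standard MHD form (quoted here as general background, with eta the magnetic diffusivity, assumed constant); in LaTeX:

\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B}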

Relevance: 30.00%

Abstract:

Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the utilization of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification: by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though helping us to reason and being used as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that, in principle, the patterns produced in the model could also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models do strive for gives us confidence that their results could apply.

This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions, the expanding phase of invasive species. Earlier analyses claimed to have shown that such invasions can be a regulated phenomenon, in the sense that higher invasion speeds at a given point in time will lead to a subsequent reduction in speed. Two simple mathematical models show that analyses based on this particular measure of invasion speed need not be evidence of regulation.

In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for adaptive behaviour for parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or stay, as opposed to randomizing the decision for each seed, can prevail.

We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because their reduction in mean fitness is coupled with a reduced variance in fitness, and fitness is multiplicative across generations and therefore sensitive to variability. This model is used for conceptual clarification: by developing a population genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies: conservative vs. diversifying and within- vs. between-generation bet-hedging. In addition, the model shows that these divisions are in fact false dichotomies.
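The multiplicative-fitness argument above is usually made precise via the geometric mean: the long-run growth of a genotype is governed by E[ln w], and for small fluctuations a standard approximation (general background, not a formula from the thesis itself) is, in LaTeX:

E[\ln w] \;\approx\; \ln \mu \;-\; \frac{\sigma^{2}}{2\mu^{2}},

so a strategy with a lower mean fitness \mu can nevertheless dominate in the long run if its variance in fitness \sigma^{2} is sufficiently reduced.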

Relevance: 30.00%

Abstract:

Background. When we are viewing natural scenes, every saccade abruptly changes both the mean luminance and the contrast structure falling on any given retinal location. Thus it would be useful if the two were independently encoded by the visual system, even when they change simultaneously. Recordings from single neurons in the cat visual system have suggested that contrast information may be quite independently represented in neural responses to simultaneous changes in contrast and luminance. Here we test to what extent this is true in human perception.
Methodology/Principal Findings. Small contrast stimuli were presented together with a 7-fold upward or downward step of mean luminance (between 185 and 1295 Td, corresponding to 14 and 98 cd/m^2), either simultaneously or with various delays (50-800 ms). The perceived contrast of the target under the different conditions was measured with an adaptive staircase method. Over the contrast range 0.1-0.45, mainly subtractive attenuation was found. Perceived contrast decreased by 0.052±0.021 (N = 3) when target onset was simultaneous with the luminance increase. The attenuation subsided within 400 ms, and even faster after luminance decreases, where the effect was also smaller. The main results were robust against differences in target types and the size of the field over which luminance changed.
Conclusions/Significance. Perceived contrast is attenuated mainly by a subtractive term when coincident with a luminance change. The effect is of ecologically relevant magnitude and duration; in other words, strict contrast constancy must often fail during normal human visual behaviour. Still, the relative robustness of the contrast signal is remarkable in view of the limited dynamic response range of retinal cones. We propose a conceptual model for how early retinal signalling may allow this.
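The adaptive staircase mentioned in the Methods converges on the perceived-contrast match by adjusting the comparison stimulus after each response; the sketch below is a generic one-up/one-down staircase (an illustration of the method class only, not the authors' exact procedure; the step size, starting level and simulated observer are made up):

import random

def staircase(respond, start=0.30, step=0.02, n_reversals=8):
    """Generic 1-up/1-down adaptive staircase, converging on the level
    at which respond() answers True 50% of the time.
    respond(level) -> True if the comparison looks higher-contrast."""
    level, direction, reversals = start, None, []
    while len(reversals) < n_reversals:
        new_direction = -1 if respond(level) else +1
        if direction is not None and new_direction != direction:
            reversals.append(level)       # the response flipped: a reversal
        direction = new_direction
        level = max(0.0, level + direction * step)
    # Discard the earliest reversals, average the rest as the matched level
    return sum(reversals[2:]) / len(reversals[2:])

# Simulated observer whose subjective match is at contrast 0.25
print(staircase(lambda c: c > 0.25 + random.gauss(0, 0.02)))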

Relevance: 30.00%

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, such as vector autoregressions, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset consists of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the previously extracted factors. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and in the creation of flash estimates of macroeconomic indicators (nowcasting).
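The two-step procedure can be sketched compactly: extract principal-component factors from the standardized panel, then regress the h-step-ahead target on those factors. The code below is a minimal static version of this idea (array names and dimensions are illustrative; the thesis's exact factor estimator and forecasting equations may differ):

import numpy as np

def factor_forecast(X, y, n_factors=3, h=1):
    """Two-step factor-model forecast of y at horizon h.
    X : (T, N) panel of predictor series
    y : (T,) target series"""
    # Step 1: estimate common factors by principal components
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = Xs @ Vt[:n_factors].T                # (T, n_factors) factor estimates
    # Step 2: regress y_{t+h} on a constant and the factors at time t
    Z = np.column_stack([np.ones(len(F) - h), F[:-h]])
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    # Forecast from the most recent factor observation
    return np.concatenate([[1.0], F[-1]]) @ beta

# Evaluation would compare this model's mean squared forecast error with
# that of a univariate AR benchmark over a hold-out period (relative MSFE).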

Relevance: 30.00%

Abstract:

The presence/absence data of twenty-seven forest insect taxa (e.g. Retinia resinella, Formica spp., Pissodes spp., several scolytids) and recorded environmental variation were used to investigate the applicability of modelling insect occurrence on the basis of satellite imagery. The sampling was based on 1800 sample plots (25 m by 25 m) placed along the sides of 30 equilateral triangles (side 1 km) in a fragmented forest area (approximately 100 km²) in Evo, southern Finland. The triangles were overlaid on land use maps interpreted from satellite images (Landsat TM 30 m multispectral scanner imagery, 1991) and on digitized geological maps. Insect occurrence was explained using either environmental variables measured in the field or those interpreted from the land use and geological maps. The fit of the logistic regression models varied between species, possibly because some species may be associated with the characteristics of single trees while others are associated with stand characteristics. The occurrence of at least certain insect species, especially those associated with Scots pine, could be assessed relatively accurately and indirectly on the basis of satellite imagery and geological maps. Models based on both remotely sensed and geological data better predicted the distribution of forest insects, except in the case of Xylechinus pilosus, Dryocoetes sp. and Trypodendron lineatum, where the differences were relatively small in favour of the models based on field measurements. The number of species was related to habitat compartment size and to the distance from the habitat edge calculated from the land use maps, but the logistic regressions suggested that other environmental variables in general masked the effect of these variables on species occurrence at the present scale.
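Modelling presence/absence from map-derived covariates is a standard logistic-regression task; the sketch below shows the general shape of such a model (all covariates and data here are hypothetical stand-ins, not the study's actual field- or map-interpreted variables):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_plots = 1800                           # one row per 25 m x 25 m sample plot

# Hypothetical covariates, e.g. interpreted land-use class, soil type,
# stand age and pine volume (illustrative placeholders only)
X = rng.random((n_plots, 4))
y = rng.integers(0, 2, n_plots)          # presence (1) / absence (0) of a taxon

model = LogisticRegression().fit(X, y)
p_occ = model.predict_proba(X)[:, 1]     # fitted probability of occurrence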

Relevance: 30.00%

Abstract:

This thesis attempts to improve the models used for predicting forest stand structure for practical purposes, e.g. forest management planning (FMP), in Finland. Comparisons were made between the Weibull and Johnson's SB distributions and between alternative regression estimation methods. The data used for the preliminary studies were local, but the final models were based on representative data. The models were validated mainly in terms of bias and RMSE in the main stand characteristics (e.g. volume), using independent data. The bivariate SBB distribution model was used to mimic realistic variation in tree dimensions by including within-diameter-class height variation. In the traditional method, using the diameter distribution with the expected height resulted in reduced height variation, whereas the alternative bivariate method utilized the error term of the height model. The lack of models for FMP was covered to some extent by the models for peatland and juvenile stands. The validation of these models showed that the more sophisticated regression estimation methods provided slightly improved accuracy. A flexible prediction and application framework for stand structure consisted of seemingly unrelated regression models for eight stand characteristics, the parameters of three optional distributions, and Näslund's height curve. The cross-model covariance structure was used for the linear prediction application, in which the expected values of the models were calibrated with the known stand characteristics. This provided a framework for validating the optional distributions and the optional sets of stand characteristics. The height distribution is recommended for the earliest stage of a stand because of its continuous nature. From a mean height of about 4 m, the Weibull dbh-frequency distribution is recommended in young stands if the input variables consist of arithmetic stand characteristics. In advanced stands, basal area-dbh distribution models are recommended. Näslund's height curve proved useful. Some efficient transformations of stand characteristics are introduced, e.g. the shape index, which combines the basal area, the stem number and the median diameter. The shape index enabled the SB model for peatland stands to detect large variation in stand densities. This model also behaved reasonably for stands on mineral soils.
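To illustrate the distribution side of such models, a two-parameter Weibull can be fitted to a sample of breast-height diameters (dbh) with scipy, and diameter-class frequencies recovered from the fitted cdf (a generic sketch using simulated diameters, not the thesis's parameter-prediction models):

import numpy as np
from scipy.stats import weibull_min

# Simulated dbh sample (cm) standing in for measured tree diameters
rng = np.random.default_rng(1)
dbh = weibull_min.rvs(c=2.5, scale=18.0, size=200, random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero)
shape, loc, scale = weibull_min.fit(dbh, floc=0)
print(f"shape={shape:.2f}, scale={scale:.1f} cm")

# Share of stems in a 15-20 cm diameter class under the fitted model
p = weibull_min.cdf(20, shape, loc, scale) - weibull_min.cdf(15, shape, loc, scale)
print(f"fraction of stems in 15-20 cm class: {p:.3f}")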

Relevance: 30.00%

Abstract:

Often the soil hydraulic parameters are obtained by the inversion of measured data (e.g. soil moisture, pressure head, cumulative infiltration, etc.). However, the inverse problem in the unsaturated zone is ill-posed for various reasons, and hence the parameters become non-unique. The presence of multiple soil layers brings additional complexity to the inverse modelling. Generalized likelihood uncertainty estimation (GLUE) is a useful approach for estimating the parameters and their uncertainty when dealing with soil moisture dynamics, which is a highly non-linear problem. Because the estimated parameters depend on the modelling scale, inverse modelling carried out on laboratory data and on field data may provide independent estimates. The objective of this paper is to compare the parameters and their uncertainty estimated through experiments in the laboratory and in the field, and to assess which of the soil hydraulic parameters are independent of the experiment. The first two layers at the field site are characterized as loamy sand and loam. For the laboratory experiment, the mean soil moisture and pressure head at three depths were measured at half-hour intervals for a period of 1 week using the evaporation method, whereas for the field experiment, soil moisture at three depths (60, 110, and 200 cm) was measured at 1-h intervals for 2 years. A one-dimensional soil moisture model based on the finite difference method was used. Calibration and validation each cover approximately 1 year. The model performance was found to be good, with the root mean square error (RMSE) varying from 2 to 4 cm^3 cm^-3. It is found from the two experiments that the mean and uncertainty of the saturated soil moisture (theta_s) and the shape parameter (n) of the van Genuchten equation are similar for both soil types. Copyright (C) 2010 John Wiley & Sons, Ltd.
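The GLUE approach amounts to Monte Carlo sampling of parameter sets from prior ranges, scoring each set with an informal likelihood, and retaining the "behavioural" sets whose likelihood exceeds a threshold. The sketch below shows this skeleton (the model function, prior ranges and the Nash-Sutcliffe-style likelihood with its 0.5 threshold are illustrative choices, not the paper's exact setup):

import numpy as np

def glue(run_model, observed, priors, n_samples=10_000, threshold=0.5):
    """Minimal GLUE skeleton.
    run_model(params) -> simulated series aligned with `observed`
    priors : {name: (low, high)} uniform prior ranges"""
    rng = np.random.default_rng(0)
    accepted, likelihoods = [], []
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in priors.items()}
        sim = run_model(params)
        # Informal likelihood: Nash-Sutcliffe efficiency of the simulation
        L = 1.0 - np.sum((sim - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)
        if L > threshold:                    # keep "behavioural" parameter sets
            accepted.append(params)
            likelihoods.append(L)
    # The spread of the accepted sets (weighted by L) quantifies the uncertainty
    return accepted, np.array(likelihoods)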