84 results for Empirical Flow Models
Abstract:
There is increasing concern about soil enrichment with K+, and its subsequent potential loss, following long-term application of poor-quality water to agricultural land. Models are increasingly used to predict or analyse water flow and chemical transport in soils and groundwater. The convective-dispersive equation (CDE) and the convective log-normal transfer function (CLT) models were fitted to the potassium (K+) leaching data, and produced equivalent goodness of fit. Breakthrough curves simulated for a range of CaCl2 concentrations from the parameters estimated at 15 mmol l(-1) CaCl2 showed the peak arriving earlier, and at a higher K+ concentration, as the CaCl2 concentration used in the leaching experiments decreased. In a second approach, the parameters estimated from the 15 mmol l(-1) CaCl2 solution were retained for all other CaCl2 concentrations and only the retardation factor (R) was optimised for each data set; this gave better predictions. When the parameters estimated at 15 mmol l(-1) CaCl2 are used, the fitted R must exceed the measured value as the CaCl2 concentration decreases (except at 10 mmol l(-1) CaCl2). Both models suffer from the fact that they must be calibrated against a data set, and some of their parameters are not measurable and cannot be determined independently.
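The equilibrium CDE fitted above has a closed-form breakthrough-curve solution. The sketch below assumes illustrative column parameters (length, velocity, dispersion coefficient), not the values fitted in the study, and shows how the retardation factor R delays simulated breakthrough:

```python
import numpy as np
from scipy.special import erfc

def cde_btc(t, x=0.1, v=0.5, D=0.01, R=1.0):
    """Relative effluent concentration C/C0 for the equilibrium CDE with a
    first-type inlet condition (valid for t > 0).
    t: time (h), x: column length (m), v: pore-water velocity (m/h),
    D: dispersion coefficient (m^2/h), R: retardation factor."""
    t = np.asarray(t, dtype=float)
    arg1 = (R * x - v * t) / (2.0 * np.sqrt(D * R * t))
    arg2 = (R * x + v * t) / (2.0 * np.sqrt(D * R * t))
    return 0.5 * erfc(arg1) + 0.5 * np.exp(v * x / D) * erfc(arg2)
```

A larger R shifts the whole curve to later times, which is the behaviour exploited when only R is re-optimised for each CaCl2 concentration.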
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented, in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined.
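A minimal illustration of the Monte Carlo identifiability analysis described above, using a toy first-order decay model (not the 'Quality Simulation Along River Systems' model itself) with one deliberately inert parameter standing in for the insensitive algal-activity parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "water quality" model: only k_decay influences the output;
# k_algae is deliberately inert, mimicking a non-identifiable parameter.
def toy_model(k_decay, k_algae, t):
    return 10.0 * np.exp(-k_decay * t)  # k_algae has no effect

t = np.linspace(0.0, 10.0, 20)
obs = toy_model(0.3, 0.0, t) + rng.normal(0, 0.1, t.size)

# Monte Carlo sampling of the parameter space
n = 2000
k_decay = rng.uniform(0.05, 1.0, n)
k_algae = rng.uniform(0.0, 1.0, n)
sse = np.array([np.sum((toy_model(kd, ka, t) - obs) ** 2)
                for kd, ka in zip(k_decay, k_algae)])

# Regional sensitivity: compare parameter distributions of the best 10%
# of runs against the full sample; an identifiable parameter shifts.
best = sse < np.quantile(sse, 0.1)
shift_decay = abs(k_decay[best].mean() - k_decay.mean())
shift_algae = abs(k_algae[best].mean() - k_algae.mean())
```

The behavioural runs cluster around the true decay rate, while the inert parameter's distribution is unchanged, which is exactly how a completely insensitive parameter reveals itself in this kind of analysis.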
Abstract:
The level of insolvency in the construction industry is high compared with other industry sectors. Given the management expertise and experience available to the construction industry, it seems strange that, according to the literature, the major causes of failure are lack of financial control and poor management. This indicates that with good cash flow management, companies could be kept operating and financially healthy; failure is preventable. Although there are financial models that can be used to predict failure, they are based on company accounts, which have been shown to be an unreliable source of data. Models are available for cash flow management and forecasting, and these could be used as a starting point for managers in rethinking their cash flow management practices. The research reported here has reached the stage of formulating researchable questions for an in-depth study, including how contractors manage their cash flow, how payment practices can be managed without damaging others in the supply chain, and the relationships between companies' financial structures and the payment regimes to which they are subjected.
Abstract:
Composites of wind speeds, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter for the 40-yr ECMWF Re-Analysis (ERA-40) and the high resolution global environment model (HiGEM). Features of conceptual models of cyclone structure—the warm conveyor belt, cold conveyor belt, and dry intrusion—have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt. In ERA-40, the high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM these are not so strongly associated. The cold conveyor belt is identified as rearward flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. The structure of the dry intrusion compares well between HiGEM and ERA-40 but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM’s ability to represent the key features of extratropical cyclone structure can give confidence in future predictions from this model.
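The compositing step can be sketched as follows, assuming synthetic storm-centred fields in place of ERA-40 or HiGEM data: each field is recentred on its cyclone before averaging, so random variability cancels while the common structure survives (the smoothing noted above):

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 41, 41

def make_storm(cy, cx):
    """Synthetic MSLP anomaly: a -20 hPa low centred at (cy, cx) plus noise."""
    y, x = np.mgrid[0:ny, 0:nx]
    low = -20.0 * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 30.0)
    return low + rng.normal(0, 2.0, (ny, nx))

# 100 storms at random positions, as delivered by a cyclone-tracking scheme
centres = rng.integers(10, 31, size=(100, 2))
fields = [make_storm(cy, cx) for cy, cx in centres]

# Composite: shift each field so its centre sits at the grid midpoint,
# then average; noise cancels while the shared low-pressure core survives.
mid = ny // 2
composite = np.mean(
    [np.roll(f, (mid - cy, mid - cx), axis=(0, 1))
     for f, (cy, cx) in zip(fields, centres)], axis=0)
```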
Abstract:
1. There is concern over the possibility of unwanted environmental change following transgene movement from genetically modified (GM) rapeseed Brassica napus to its wild and weedy relatives. 2. The aim of this research was to develop a remote sensing-assisted methodology to help quantify gene flow from crops to their wild relatives over wide areas. Emphasis was placed on locating sites of sympatry, where the frequency of gene flow is likely to be highest, and on measuring the size of rapeseed fields to allow spatially explicit modelling of wind-mediated pollen-dispersal patterns. 3. Remote sensing was used as a tool to locate rapeseed fields, and a variety of image-processing techniques was adopted to facilitate the compilation of a spatially explicit profile of sympatry between the crop and Brassica rapa. 4. Classified satellite images containing rapeseed fields were first used to infer the spatial relationship between donor rapeseed fields and recipient riverside B. rapa populations. Such images also have utility for improving the efficiency of ground surveys by identifying probable sites of sympatry. The same data were then also used for the calculation of mean field size. 5. This paper forms a companion paper to Wilkinson et al. (2003), in which these elements were combined to produce a spatially explicit profile of hybrid formation over the UK. The current paper demonstrates the value of remote sensing and image processing for large-scale studies of gene flow, and describes a generic method that could be applied to a variety of crops in many countries. 6. Synthesis and applications. The decision to approve or prevent the release of a GM cultivar is made at a national rather than regional level. It is highly desirable that data relating to the decision-making process are collected at the same scale, rather than relying on extrapolation from smaller experiments designed at the plot, field or even regional scale. 
It would be extremely difficult and labour intensive to attempt to carry out such large-scale investigations without the use of remote-sensing technology. This study used rapeseed in the UK as a model to demonstrate the value of remote sensing in assembling empirical information at a national level.
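As a rough sketch of the field-size step, assuming a toy classified raster and a hypothetical 30 m pixel size in place of real satellite imagery, connected-component labelling yields per-field areas and hence a mean field size:

```python
import numpy as np
from scipy import ndimage

# Toy classified scene: 1 = rapeseed, 0 = other land cover, standing in
# for a classified satellite image.
scene = np.zeros((20, 20), dtype=int)
scene[2:6, 2:6] = 1      # 16-pixel field
scene[10:14, 5:11] = 1   # 24-pixel field
scene[15:18, 14:18] = 1  # 12-pixel field

# Each connected patch of rapeseed pixels is treated as one field
labels, n_fields = ndimage.label(scene)
areas_px = ndimage.sum(scene, labels, index=range(1, n_fields + 1))

pixel_area_ha = 0.09     # hypothetical 30 m x 30 m pixels = 0.09 ha
mean_field_ha = float(areas_px.mean() * pixel_area_ha)
```

The mean field size obtained this way is what feeds the spatially explicit pollen-dispersal modelling described above.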
Abstract:
Under the Agreement on Trade-Related Aspects of Intellectual Property Rights, all member-countries of the World Trade Organization are required to provide an "effective" system of plant variety protection within a specific time frame. In many developing countries, this has led to a divisive debate about the fundamental desirability of extending intellectual property rights to agriculture. Empirical studies on the economic impacts of plant variety protection, especially its ability to generate large private sector investments in plant breeding and to facilitate the transfer of technology, have been very limited. This paper examines two aspects of the international experience of plant variety protection: (a) the relationship between legislation, research, and development expenditures and plant variety protection grants, i.e., the innovation effect and (b) the role of plant variety protection in facilitating the flow of varieties across countries, i.e., the transferability effect.
Abstract:
Models of windblown pollen or spore movement are required to predict gene flow from genetically modified (GM) crops and the spread of fungal diseases. We suggest a simple form for a function describing the distance moved by a pollen grain or fungal spore, for use in generic models of dispersal. The function has power-law behaviour over sub-continental distances. We show that air-borne dispersal of rapeseed pollen in two experiments was inconsistent with an exponential model, but was fitted by power-law models, implying a large contribution from distant fields to the catches observed. After allowing for this 'background' by applying Fourier transforms to deconvolve the mixture of distant and local sources, the data were best fitted by power laws with exponents between 1.5 and 2. We also demonstrate that for a simple model of area sources, the median dispersal distance is a function of field radius and that measurement from the source edge can be misleading. Using an inverse-square dispersal distribution deduced from the experimental data and the distribution of rapeseed fields deduced by remote sensing, we successfully predict observed rapeseed pollen density in the city centres of Derby and Leicester (UK).
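A sketch of the kernel comparison, assuming truncated kernels on [r_min, r_max] with illustrative distances rather than the paper's fitted values: for a power law with exponent b the median distance follows from inverting the CDF, and the tail mass beyond a given range can be set against an exponential kernel of identical median:

```python
import numpy as np

def powerlaw_median(b, r_min=1.0, r_max=1e5):
    """Median travel distance for p(r) proportional to r**(-b),
    truncated to [r_min, r_max] (b != 1)."""
    a = 1.0 - b
    norm = r_max**a - r_min**a
    return (r_min**a + 0.5 * norm) ** (1.0 / a)

def powerlaw_tail(R, b, r_min=1.0, r_max=1e5):
    """Fraction of grains travelling beyond R under the same kernel."""
    a = 1.0 - b
    return (r_max**a - R**a) / (r_max**a - r_min**a)

def exp_tail(R, lam):
    """Fraction travelling beyond R for an exponential kernel exp(-r/lam)."""
    return np.exp(-R / lam)

# An inverse-square kernel and an exponential kernel with the same median
m = powerlaw_median(2.0)            # about 2 m with these truncations
lam = m / np.log(2.0)               # exponential with identical median
far_pl = powerlaw_tail(1000.0, 2.0) # a real long-range contribution
far_ex = exp_tail(1000.0, lam)      # effectively zero
```

Even with matched medians, the power-law kernel still delivers a measurable fraction of pollen beyond 1 km while the exponential delivers essentially none, which is the 'background' contribution from distant fields noted above.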
Abstract:
Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the Multiwell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, CA, 2006) and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167) data evaluation panels, which represents the data rather well over the experimental pressure ranges. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145) and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398) of the bond dissociation energy, D298(BrO-NO2) = 118 kJ mol(-1), corresponding to ΔH° = 114.3 kJ mol(-1) at 0 K) and the use of RRKM/ME methods, fitting calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy of a factor of ca. 4 between the calculations and the database of rate constants at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways, either by increasing ΔH°(0 K) to an unrealistically high value (149.3 kJ mol(-1)) or by increasing
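The empirical falloff representation used by such evaluation panels can be sketched as follows; the k0 and kinf parameters below are illustrative values of roughly the right magnitude for BrO + NO2 + M, and should be checked against the evaluation itself rather than taken as the recommended numbers:

```python
import numpy as np

def k_falloff(T, M, k0_300, n, kinf_300, m):
    """JPL-style falloff expression for a termolecular association:
    k(T,[M]) with broadening factor Fc = 0.6."""
    k0 = k0_300 * (T / 300.0) ** (-n) * M      # low-pressure term, k0*[M]
    kinf = kinf_300 * (T / 300.0) ** (-m)      # high-pressure limit
    ratio = k0 / kinf
    f_exp = 1.0 / (1.0 + np.log10(ratio) ** 2)
    return (k0 / (1.0 + ratio)) * 0.6 ** f_exp

# Illustrative parameters (check the JPL evaluation for recommended values)
k0_300, n = 5.2e-31, 3.2      # cm^6 molecule^-2 s^-1
kinf_300, m = 6.9e-12, 2.9    # cm^3 molecule^-1 s^-1

M = 2.46e19   # air number density at 298 K, 1 atm (molecule cm^-3)
k_1atm = k_falloff(298.0, M, k0_300, n, kinf_300, m)
```

At low [M] the expression collapses to k0[M], which is where the factor-of-4 discrepancy discussed above appears.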
Abstract:
The existing literature on lean construction is overwhelmingly prescriptive with little recognition of the social and politicised nature of the diffusion process. The prevailing production-engineering perspective too often assumes that organizations are unitary entities where all parties strive for the common goal of 'improved performance'. An alternative perspective is developed that considers the diffusion of lean construction across contested pluralistic arenas. Different actors mobilize different storylines to suit their own localized political agendas. Multiple storylines of lean construction continuously compete for attention with other management fashions. The conceptualization and enactment of lean construction therefore differs across contexts, often taking on different manifestations from those envisaged. However, such localized enactments of lean construction are patterned and conditioned by pre-existing social and economic structures over which individual managers have limited influence. Taking a broader view, 'leanness' can be conceptualized in terms of a quest for structural flexibility involving restructuring, downsizing and outsourcing. From this perspective, the UK construction industry can be seen to have embarked upon leaner ways of working in the mid-1970s, long before the terminology of lean thinking came into vogue. Semi-structured interviews with construction sector policy-makers provide empirical support for the view that lean construction is a multifaceted concept that defies universal definition.
Abstract:
The paper draws from three case studies of regional construction firms operating in the UK. The case studies provide new insights into the ways in which such firms strive to remain competitive. Empirical data were derived from multiple interactions with senior personnel within each firm. Data collection methods included semi-structured interviews, informal interactions, archival research, and workshops. The initial research question was informed by existing resource-based theories of competitiveness and an extensive review of construction-specific literature. However, subsequent emergent empirical findings progressively pointed towards the need to mobilise alternative theoretical models that emphasise localised learning and embeddedness. The findings point towards the importance of de-centralised structures that enable multiple business units to become embedded within localised markets. A significant degree of autonomy is essential to facilitate entrepreneurial behaviour. In essence, sustained competitiveness was found to rest on the way de-centralised business units enact ongoing processes of localised learning. Once local business units have become embedded within localised markets, the essential challenge is how to encourage continued entrepreneurial behaviour while maintaining some degree of centralised control and coordination. This presents a number of tensions and challenges, which play out differently across each of the three case studies.
Abstract:
Several studies have highlighted the importance of the cooling period in oil absorption in deep-fat fried products. Specifically, it has been established that the largest proportion of the oil that ends up in the food is sucked into the porous crust region after the fried product is removed from the oil bath, stressing the importance of this time interval. The main objective of this paper was to develop a predictive mechanistic model that can be used to understand the principles behind post-frying cooling oil absorption kinetics, and that can also help identify the key parameters affecting the final oil uptake by the fried product. The model was developed for two different geometries, an infinite slab and an infinite cylinder, and was divided into two main sub-models, one describing the immersion frying period itself and the other the post-frying cooling period. The immersion frying period was described by a transient moving-front model that considered the movement of the crust/core interface, whereas post-frying cooling oil absorption was treated as a pressure-driven flow mediated by capillary forces. A key element of the model was the hypothesis that oil suction only begins once a positive pressure driving force has developed. The mechanistic model was based on measurable physical and thermal properties and process parameters, with no need for empirical data fitting, and can be used to study oil absorption in any deep-fat fried product that satisfies the assumptions made.
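As a rough illustration of capillary, pressure-driven crust filling, here is a classical Lucas-Washburn sketch with assumed pore and oil properties; this is not the paper's two-geometry moving-front model, only the kind of capillary-suction term it builds on:

```python
import numpy as np

def washburn_depth(t, r=1e-6, gamma=0.03, theta=0.0, mu=0.05):
    """Lucas-Washburn penetration depth l(t) = sqrt(gamma*cos(theta)*r*t/(2*mu))
    for capillary imbibition into a pore of radius r (all values assumed:
    gamma = oil surface tension in N/m, mu = oil viscosity in Pa s)."""
    return np.sqrt(gamma * np.cos(theta) * r * t / (2.0 * mu))

def uptake_fraction(t, crust=0.5e-3, **kw):
    """Fraction of a crust of given thickness filled at time t (capped at 1)."""
    return np.minimum(washburn_depth(t, **kw) / crust, 1.0)
```

The square-root-of-time growth shows why the post-frying interval dominates uptake once a positive pressure driving force exists: penetration is fast at first and the crust saturates quickly.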
Abstract:
The capability of a feature model of immediate memory (Nairne, 1990; Neath, 2000) to predict and account for a relationship between absolute and proportion scoring of immediate serial recall when memory load is varied (the list-length effect, LLE) is examined. The model correctly predicts the novel finding of an LLE in immediate serial order memory similar to that observed with free recall and previously assumed to be attributable to the long-term memory component of that procedure (Glanzer, 1972). The usefulness of formal models as predictive tools and the continuity between short-term serial order and longer term item memory are considered.
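The distinction between the two scoring schemes can be shown with hypothetical numbers (these are invented for illustration, not the feature model's output): if mean items recalled grows sublinearly with list length, the absolute score rises while the proportional score falls:

```python
# Hypothetical means, chosen only to illustrate the two scoring schemes
list_lengths = [4, 6, 8, 10]
recalled = [3.6, 4.8, 5.6, 6.0]   # invented mean items correct

absolute = recalled                                   # absolute scores
proportion = [r / n for r, n in zip(recalled, list_lengths)]
# Absolute rises with memory load while proportion falls: the two
# scorings point in opposite directions for the same data, which is
# the relationship (the LLE) the feature model must capture.
```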
Abstract:
This paper presents a hybrid control strategy integrating dynamic neural networks and feedback linearization into a predictive control scheme. Feedback linearization is an important nonlinear control technique which transforms a nonlinear system into a linear system using nonlinear transformations and a model of the plant. In this work, empirical models based on dynamic neural networks have been employed. Dynamic neural networks are mathematical structures described by differential equations, which can be trained to approximate general nonlinear systems. A case study based on a mixing process is presented.
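A minimal sketch of the feedback-linearization step, with the true scalar dynamics standing in for the trained dynamic-neural-network model; all dynamics and gains here are invented for illustration:

```python
import numpy as np

# Plant: x' = f(x) + g(x) u  (a simple nonlinear scalar system)
f = lambda x: -x - x**3
g = lambda x: 2.0 + np.cos(x)   # always positive, so invertible

def fb_lin_control(x, v, f_hat=f, g_hat=g):
    """Feedback-linearizing law u = (v - f_hat(x)) / g_hat(x).
    In the scheme above, f_hat and g_hat would come from the trained
    dynamic neural network; here the true dynamics stand in for it."""
    return (v - f_hat(x)) / g_hat(x)

def simulate(x0, v_fn, dt=1e-3, steps=5000):
    x = x0
    for k in range(steps):
        v = v_fn(k * dt, x)
        u = fb_lin_control(x, v)
        x = x + dt * (f(x) + g(x) * u)   # Euler step of the true plant
    return x

# Outer linear loop: v = -a*(x - x_ref) drives x to x_ref exponentially,
# because the inner loop has transformed the plant into x' = v.
x_ref = 1.5
x_final = simulate(x0=0.0, v_fn=lambda t, x: -4.0 * (x - x_ref))
```

With the linearizing inner loop in place, the outer (predictive) controller only ever sees the linear system x' = v, which is the point of combining the empirical model with feedback linearization.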
An empirical study of process-related attributes in segmented software cost-estimation relationships
Abstract:
Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor adjustment and predictive characteristics when the historical database contains data from projects of a heterogeneous nature. Segmenting the input domain according to clusters obtained from the database of historical projects serves as a tool for more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that applying clustering algorithms without first considering the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study, using the ISBSG-8 database and the EM clustering algorithm, of the influence of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that considering these attributes significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude.
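A compact sketch of the segmentation idea, using a minimal hand-rolled EM for a two-component Gaussian mixture on a synthetic two-population 'project database' (ISBSG data are not reproduced here; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a heterogeneous project database: two project
# populations with different size-to-effort relationships.
size = np.r_[rng.uniform(50, 150, 60), rng.uniform(500, 800, 60)]
effort = np.r_[5.0 * size[:60] + rng.normal(0, 20, 60),
               12.0 * size[60:] + rng.normal(0, 60, 60)]
X = np.column_stack([size, effort])

# Minimal EM for a two-component spherical Gaussian mixture
Z = (X - X.mean(0)) / X.std(0)      # standardise for a stable E-step
mu = Z[[0, -1]].copy()              # crude but well-separated init
var = np.ones(2)
w = np.array([0.5, 0.5])
for _ in range(50):
    d2 = ((Z[:, None, :] - mu[None]) ** 2).sum(-1)   # (n, 2) sq. distances
    resp = w * np.exp(-0.5 * d2 / var) / var         # E-step (2-D spherical)
    resp /= resp.sum(1, keepdims=True)
    nk = resp.sum(0)                                 # M-step
    mu = (resp.T @ Z) / nk[:, None]
    var = (resp * d2).sum(0) / (2.0 * nk)
    w = nk / len(Z)
labels = resp.argmax(1)

# One local size-effort estimation relationship per segment
slopes = sorted(np.polyfit(size[labels == c], effort[labels == c], 1)[0]
                for c in (0, 1))
```

Each segment recovers its own local relationship, which a single global regression over the pooled data would blur; conditioning the clustering on process attributes, as the paper does, changes which segments (and hence which local relationships) emerge.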