917 results for Model Construction and Estimation
Abstract:
In this paper we propose a two-component polarimetric model for soil moisture estimation on vineyards, suited to C-band radar data. According to a polarimetric analysis carried out here, this scenario is made up of one dominant direct return from the soil and a multiple-scattering component accounting for disturbing and non-modeled signal fluctuations from soil and short vegetation. We propose a combined X-Bragg/Fresnel approach to characterize the polarized direct response from the soil. A validation of this polarimetric model has been performed in terms of its consistency with the available data, both from RADARSAT-2 and from indoor measurements. High inversion rates are reported for different phenological stages of the vines, and the model gives a consistent interpretation of the data as long as the volume component power remains at or below about 50% of the surface contribution power. However, the scarcity of soil moisture measurements in this study prevents validation of the algorithm in terms of the accuracy of soil moisture retrieval, and an extensive campaign is required to fully demonstrate the validity of the model. Different sources of mismatch between the model and the data are also discussed and analyzed.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
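The pilot-point idea described above can be sketched in a few lines: hydraulic property values are estimated only at a handful of points and spread to the model grid by spatial interpolation. The sketch below uses inverse-distance weighting as a simplified stand-in for the kriging typically paired with PEST; the coordinates and log-conductivity values are invented for illustration, not from the paper.

```python
import numpy as np

def interpolate_from_pilot_points(pilot_xy, pilot_logk, grid_xy, power=2.0):
    """Spread log-conductivity values defined at pilot points onto grid cells
    using inverse-distance weighting (a simplified stand-in for kriging)."""
    # Pairwise distances: (n_grid, n_pilot)
    d = np.linalg.norm(grid_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # avoid division by zero at a pilot point
    w = d ** (-power)
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per grid cell
    return w @ pilot_logk

# Three pilot points carry the estimable parameters ...
pilot_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
pilot_logk = np.array([-4.0, -2.0, -3.0])    # log10 K at each pilot point

# ... and the field is interpolated to every model grid cell.
grid_xy = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
field = interpolate_from_pilot_points(pilot_xy, pilot_logk, grid_xy)
```

In calibration, an optimizer such as PEST would adjust `pilot_logk` until model outputs match field measurements, with regularization constraining the interpolated field.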
Abstract:
The adsorption of two dissociating and two non-dissociating aromatic compounds from dilute aqueous solutions onto an untreated, commercially available activated carbon (B.D.H.) was investigated systematically. All adsorption experiments were carried out in pH-controlled aqueous solutions. The experimental isotherms were fitted to four different models (the Langmuir homogeneous model, the Langmuir binary model, the single Langmuir-Freundlich model and the double Langmuir-Freundlich model). The variation of the model parameters with solution pH was studied and used to gain further insight into the adsorption process. The relationship between the model parameters, the solution pH and the pK(a) was used to predict the adsorption capacity for the molecular and ionic forms of the solutes in other solutions. A relationship was sought to predict the effect of pH on the adsorption systems and to estimate the maximum adsorption capacity of the carbon at any pH at which the solute is reasonably well ionized. N-2 and CO2 adsorption were used to characterize the carbon, and X-ray Photoelectron Spectroscopy (XPS) was used for surface elemental analysis of the activated carbon.
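The single-solute isotherm models named above can be fitted with standard nonlinear least squares. The sketch below fits the Langmuir and single Langmuir-Freundlich forms to synthetic data; the functional forms are the standard ones, but the concentrations and parameter values are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qmax, b):
    # q = qmax * b * c / (1 + b * c)
    return qmax * b * c / (1.0 + b * c)

def langmuir_freundlich(c, qmax, b, n):
    # q = qmax * (b*c)^n / (1 + (b*c)^n); n = 1 recovers Langmuir
    bc = (b * c) ** n
    return qmax * bc / (1.0 + bc)

# Synthetic isotherm data, purely illustrative
c = np.linspace(0.1, 5.0, 20)                    # equilibrium concentration, mmol/L
q = langmuir(c, 2.0, 1.5) + 0.01 * np.sin(c)     # loading, mmol/g, with small noise

p_lang, _ = curve_fit(langmuir, c, q, p0=[1.0, 1.0], bounds=(0, np.inf))
p_lf, _ = curve_fit(langmuir_freundlich, c, q, p0=[1.0, 1.0, 1.0],
                    bounds=(0, np.inf))
```

Repeating such fits at several controlled pH values, as in the study, yields the pH dependence of `qmax` and `b` that is then related to pK(a).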
Abstract:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054], J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
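The bias in estimates of the speed of mean reversion can be reproduced with a small Monte Carlo along the lines described: simulate a discretized mean-reverting short rate and estimate the reversion speed from the drift regression (a simple moment-based estimator, not the full GMM of the paper). All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma, dt = 0.2, 0.06, 0.02, 1 / 12   # true parameters (illustrative)
T, n_rep = 300, 500                                  # 25 years of monthly data

estimates = []
for _ in range(n_rep):
    # Euler-discretized Vasicek short rate: dr = kappa*(theta - r)dt + sigma dW
    r = np.empty(T)
    r[0] = theta
    for t in range(T - 1):
        r[t + 1] = (r[t] + kappa * (theta - r[t]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    # Moment-style estimate: regress dr on r; slope = -kappa_hat * dt
    slope = np.polyfit(r[:-1], np.diff(r), 1)[0]
    estimates.append(-slope / dt)

bias = np.mean(estimates) - kappa   # positive: mean reversion looks too strong
```

In this setup the average estimated speed of mean reversion substantially exceeds the true value, echoing the paper's warning against drawing strong conclusions about mean reversion from such estimates in typical sample sizes.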
Abstract:
Constructing and executing distributed systems that can adapt to their operating context in order to sustain the provided services and service qualities are complex tasks. Managing the adaptation of multiple, interacting services is particularly difficult, since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth in the number of potential system configurations, derived from the variabilities of each service, needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle runtime adaptation are both error-prone and time-consuming, and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of adaptive system construction and execution, and to handle the problem of the exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the problem of the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic, by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
Abstract:
Recent investigations into cross-country convergence follow Mankiw, Romer, and Weil (1992) in using a log-linear approximation to the Swan-Solow growth model to specify regressions. These studies tend to assume a common and exogenous technology. In contrast, the technology catch-up literature endogenises the growth of technology. The use of capital stock data renders the approximations and over-identification of the Mankiw model unnecessary and enables us, using dynamic panel estimation, to estimate the separate contributions of diminishing returns and technology transfer to the rate of conditional convergence. We find that both effects are important.
Abstract:
2000 Mathematics Subject Classification: 62F15.
Abstract:
This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges and buildings. ETO views each product as a delivery item in an industrial project and needs to make an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts. Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time-consuming, this approach usually does not lead to an accurate cost estimate; errors range from 30% to 50%. This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator is able to develop a cost estimation model for a chosen ETO industry in a more expeditious, systematic and accurate manner. The development of the proposed methodology followed the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates are significantly more accurate than the estimates actually made, with a mean absolute error rate of 17.3%.
This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches in that a methodology is developed for quickly customizing a cost estimation model to a chosen application domain. In addition to more accurate estimation, its major contributions are its transferability to other users and its applicability to different ETO operations.
Abstract:
Peer reviewed
Abstract:
Background: Body composition is affected by diseases and affects responses to medical treatments, dosages of medicines, etc., while an abnormal body composition contributes to the causation of many chronic diseases. While we have reliable biochemical tests for certain nutritional parameters of body composition, such as iron or iodine status, and we have harnessed nuclear physics to estimate the body's content of trace elements, the very basic quantification of body fat content and muscle mass remains highly problematic. Both body fat and muscle mass are vitally important, as they have opposing influences on chronic disease, but they have seldom been estimated as part of population health surveillance. Instead, most national surveys have merely reported BMI and waist circumference, or sometimes the waist/hip ratio; these indices are convenient but do not have any specific biological meaning. Anthropometry offers a practical and inexpensive method for muscle and fat estimation in clinical and epidemiological settings; however, its use is imperfect due to many limitations, such as a shortage of reference data, misuse of terminology, unclear assumptions, and the absence of properly validated anthropometric equations. To date, anthropometric methods are not sensitive enough to detect muscle and fat loss. Aims: The aim of this thesis is to estimate adipose/fat and muscle mass in health, disease and during weight loss by: 1. evaluating and critiquing the literature to identify the best published prediction equations for adipose/fat and muscle mass estimation; 2. deriving and validating adipose tissue and muscle mass prediction equations; and 3. evaluating the derived equations, along with anthropometric indices and the best equations retrieved from the literature, in health, metabolic illness and during weight loss. Methods: A systematic review following the Cochrane Review method was used for reviewing muscle mass estimation papers that used MRI as the reference method.
Fat mass estimation papers were critically reviewed. Data from subjects of mixed ethnicity, age and body mass who underwent whole-body magnetic resonance imaging to quantify adipose tissue and muscle mass (dependent variables) and anthropometry (independent variables) were used in the derivation/validation analysis. Multiple regression and Bland-Altman plots were applied to evaluate the prediction equations. To determine how well the equations identify metabolic illness, the English and Scottish health surveys were studied. Multiple regression and binary logistic regression were applied to assess model fit and associations. The populations were also divided into quintiles and relative risk was analysed. Finally, the prediction equations were evaluated by applying them to a pilot study of 10 subjects who underwent whole-body MRI, anthropometric measurement and muscle strength testing before and after weight loss, to determine how well the equations identify change in adipose/fat mass and muscle mass. Results: The estimation of fat mass has serious problems. Despite advances in technology and science, prediction equations for the estimation of fat mass depend on limited historical reference data and remain dependent upon assumptions that have not yet been properly validated for different population groups. Muscle mass does not have the same conceptual problems; however, its measurement is still problematic and reference data are scarce. The derivation and validation analysis in this thesis was satisfactory; the derived equations were similar to, or better than, prediction equations in the literature. Applying the prediction equations to metabolic illness and weight loss showed how well the equations identify metabolic illness, with significant associations with diabetes, hypertension, HbA1c and blood pressure, and moderate to high correlations with MRI-measured adipose tissue and muscle mass before and after weight loss.
Conclusion: Adipose tissue mass and, to an extent, muscle mass can now be estimated for many purposes as population or group means. However, these equations must not be used for assessing fatness or categorising individuals. Further exploration in different populations and health surveys would be valuable.
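The derivation/validation workflow described above (multiple regression to derive a prediction equation, then Bland-Altman analysis to assess agreement) can be sketched as follows. All data here are synthetic stand-ins, and the predictors and coefficients are assumptions for illustration, not the thesis's equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic derivation sample: MRI-measured muscle mass (kg) predicted
# from weight, height and sex (invented relationship, for illustration only).
n = 200
weight = rng.normal(75, 12, n)        # kg
height = rng.normal(1.70, 0.09, n)    # m
sex = rng.integers(0, 2, n)           # 1 = male
muscle = 0.3 * weight + 12 * height + 5 * sex - 15 + rng.normal(0, 1.5, n)

# Derive the prediction equation by multiple regression
X = np.column_stack([weight, height, sex, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, muscle, rcond=None)
pred = X @ coef

# Bland-Altman statistics: mean bias and 95% limits of agreement
diff = pred - muscle
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
```

In practice the equation would be derived on one sample and the Bland-Altman analysis run on an independent validation sample; on the derivation sample itself the mean bias is zero by construction.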
Abstract:
Lithium-ion (Li-ion) batteries have gained attention in recent decades because of their undisputed advantages over other types of batteries. They are used in many of the devices we need in our daily lives, such as cell phones, laptop computers, cameras, and many other electronic devices. They are also used in smart-grid technology, stand-alone wind and solar systems, Hybrid Electric Vehicles (HEV), and Plug-in Hybrid Electric Vehicles (PHEV). Despite the rapid increase in the use of Li-ion batteries, the lack of useful battery models remains a significant problem: available models are limited, while the models developed by chemists are inadequate for engineering use and very complex. A battery management system (BMS) aims to optimize the use of the battery, making the whole system more reliable, durable and cost-effective. Perhaps the most important function of the BMS is to provide an estimate of the State of Charge (SOC). SOC is the ratio of the available ampere-hours (Ah) in the battery to the total Ah of a fully charged battery. The Open Circuit Voltage (OCV) of a fully relaxed battery has an approximately one-to-one relationship with the SOC. Therefore, if this voltage is known, the SOC can be found. However, the relaxed OCV can only be measured when the battery is at rest and the internal battery chemistry has reached equilibrium. This thesis focuses on Li-ion battery cell modelling and SOC estimation. In particular, the thesis introduces a simple but comprehensive model for the battery and a novel online, accurate and fast SOC estimation algorithm for the primary purpose of use in electric and hybrid-electric vehicles and microgrid systems. The thesis aims to (i) form a baseline characterization for dynamic modelling; and (ii) provide a tool for use in state-of-charge estimation. The proposed modelling and SOC estimation schemes are validated through comprehensive simulation and experimental results.
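The OCV-SOC relationship and ampere-hour definition of SOC described above lend themselves to a simple sketch: initialize SOC from a relaxed OCV reading, then track it by coulomb counting between rest periods. The lookup table and cell capacity below are assumed, illustrative values, not the thesis's model or estimation algorithm.

```python
import numpy as np

# Assumed, illustrative OCV-SOC table for a Li-ion cell (not from the thesis)
ocv_pts = np.array([3.0, 3.3, 3.55, 3.7, 3.9, 4.1, 4.2])   # volts
soc_pts = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])    # fraction

def soc_from_ocv(ocv):
    """Invert the OCV-SOC curve of a fully relaxed cell by interpolation."""
    return float(np.interp(ocv, ocv_pts, soc_pts))

def coulomb_count(soc0, current_a, dt_s, capacity_ah):
    """Integrate current (positive = discharge) to track SOC between
    relaxed-OCV corrections."""
    drawn_ah = np.cumsum(current_a) * dt_s / 3600.0
    return soc0 - drawn_ah / capacity_ah

# Initialize SOC from a relaxed OCV reading, then track a 2 A discharge
soc0 = soc_from_ocv(3.9)                        # starting SOC from rest voltage
current = np.full(3600, 2.0)                    # 2 A for one hour, 1 s steps
soc = coulomb_count(soc0, current, 1.0, 10.0)   # assumed 10 Ah cell
```

Pure coulomb counting drifts with current-sensor error, which is why online schemes such as the one the thesis proposes fuse it with model-based corrections.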
Abstract:
In the early 1990s the University of Salford was typical of most pre-1992 universities in that, whilst students provided much of its income, little attention was paid to pedagogy. As Warren Piper (1994) observed, university teachers were professional in their subject areas but generally did not seek to acquire a pedagogy of HE. This was the case in Salford. Courses were efficiently run, but only a minority of staff were actively engaged in considering learning and teaching issues. Instead, staff time was spent on research and commercial activity.----- In the mid-1990s the teaching environment began to change significantly. As well as Dearing, the advent of the QAA and teaching quality reviews, Salford was already experiencing changes in the characteristics of its student body. Widening access was on our agenda before it became so predominant nationally. With increasing numbers and heterogeneity of students, as well as these external factors, new challenges were facing the University and its teaching domain.----- This paper describes how a culture which values teaching, learning and pedagogic inquiry is being created in the university. It then focuses on parts of this process specific to the Faculty of Business and Informatics, namely the Faculty's Learning and Teaching Research Network and the establishment of the Centre for Construction Education in the School of Construction and Property Management.----- The Faculty of Business and Informatics' Learning and Teaching Research Network aims to raise the profile, quality and volume of pedagogic research across the five schools in the faculty. The initiative is targeted at all academics regardless of previous research experience. We hope to grow and nurture research potential where it exists and to acknowledge and use the existing expertise of subject-based researchers in collaborative ventures. We work on the principle that people are delighted to share what they know but need appreciation and feedback for doing so.
A further aim is to surface and celebrate the significant amount of tacit knowledge in the area of pedagogy, evidenced by the strength of student and employer feedback in many areas of the faculty's teaching.----- The Faculty embraces generic and core management expertise but also includes applied management disciplines in information systems and construction and property management, where internationally leading research activities and networked centres of excellence have been established. Drawing from this experience, and within the context of the Faculty network, a Centre for Construction Education is being established with key international external partners to develop a sustainable business model of an enterprising pedagogic centre that can undertake useful research to underpin teaching in the Faculty whilst offering sustainable business services, allowing it to benefit from pump-priming grant funding.----- Internal and external networking are important elements in our plans and ongoing work. Key to this are our links with the LTSN subject centres (BEST and CEBE) and the LTSN generic centre. The paper discusses networking as a concept and gives examples of practices which have proved useful in this context.----- The academic influences on our approach are also examined. Dixon's (2000) work examining how a range of companies succeed through internal knowledge sharing has provided a range of transferable practices. We also examine the notion of dialogue in this context, defined by Ballantyne (1999) as 'the interactive human process of reasoning together which comes into being through interactions based on spontaneity or need and is enabled by trust'. Social constructionist principles of Practical Authorship (Shotter, 1993; Pavlica, Holman and Thorpe, 1998) have also proved useful in developing our perspective on learning and knowledge creation within our community of practice.
Abstract:
The rising problems associated with construction, such as decreasing quality and productivity, labour shortages, occupational safety, and inferior working conditions, have opened the possibility of more revolutionary solutions within the industry. One prospective option is the implementation of innovative technologies such as automation and robotics, which have the potential to improve the industry in terms of productivity, safety and quality. The construction work site could, theoretically, become a safer environment, with more efficient execution of the work, greater consistency of outcomes and a higher level of control over the production process. By identifying the barriers to the implementation of automation and robotics in construction, and investigating ways in which to overcome them, contributions could be made towards better understanding and facilitating, where relevant, greater use of these technologies in the construction industry so as to promote its efficiency. This research aims to ascertain and explain the barriers to construction automation and robotics implementation by exploring and establishing the relationship between characteristics of the construction industry and attributes of existing construction automation and robotics technologies on the one hand, and the level of usage and implementation on the other, in three selected countries: Japan, Australia and Malaysia. These three countries were chosen because their construction industry characteristics provide contrast in terms of culture, gross domestic product, technology application, organisational structure and labour policies.
This research uses a mixed-method approach to gathering data, both quantitative and qualitative, employing a questionnaire survey and an interview schedule, and drawing on a wide-ranging sample from management through to on-site users, working in companies ranging from small (less than AUD 0.2 million) to large (more than AUD 500 million) and involved in a broad range of business types and construction sectors. Detailed quantitative (statistical) and qualitative (content) data analyses were performed to provide a set of descriptions, relationships and differences. The statistical tests selected include cross-tabulations and bivariate and multivariate analysis for investigating possible relationships between variables, and the Kruskal-Wallis and Mann-Whitney U tests of independent samples for hypothesis testing and for generalising from the research sample to the construction industry population. Findings and conclusions arising from the research, which include ranking schemes produced for four key areas (the effect of construction attributes on level of usage; barrier variables; differing levels of usage between countries; and future trends), have established a number of potential areas that could affect the level of implementation, both globally and for individual countries.
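The nonparametric tests named above are available in `scipy.stats`. A minimal sketch with invented Likert-style usage scores standing in for the study's survey data (three country samples, one pairwise follow-up):

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(2)
# Illustrative synthetic Likert-style usage scores, not the study's data
japan = rng.integers(3, 6, 40)        # scores 3-5: higher reported usage
australia = rng.integers(2, 5, 40)    # scores 2-4
malaysia = rng.integers(1, 4, 40)     # scores 1-3: lower reported usage

# Kruskal-Wallis: do the three independent country samples differ overall?
h_stat, p_kw = kruskal(japan, australia, malaysia)

# Mann-Whitney U: pairwise follow-up between two countries
u_stat, p_mw = mannwhitneyu(japan, malaysia, alternative="two-sided")
```

With clearly separated samples like these, both tests reject the hypothesis of identical distributions, which is the kind of between-country difference in usage levels the research reports.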