85 results for Data replication processes


Relevance: 30.00%

Abstract:

In recent years, the Standards for Qualified Teacher Status in England have placed new emphasis on student-teachers' ability to become integrated into the 'corporate life of the school' and to work with other professionals. Little research, however, has been carried out into how student-teachers perceive the social processes and interactions that are central to such integration during their initial teacher education school placements. This study aims to shed light on these perceptions. The data, gathered from 23 student-teachers through interviews and reflective writing, illustrate the extent to which the participants perceived such social processes as supporting or obstructing their development as teachers. Signals of inclusion, the degree of match or mismatch in students' and school colleagues' role expectations, and the social awareness of both school and student-teacher emerged as crucial factors in this respect. The student-teachers' accounts show their social interactions with school staff to be meaningful in developing their 'teacher self' and to be profoundly emotionally charged. The implications for mentor and student-teacher role preparation are discussed in this article.

Relevance: 30.00%

Abstract:

This paper addresses the need for accurate predictions of fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these tasks more effectively. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study proposes software reliability growth models (SRGMs) for predicting fault inflow. Although these models were originally developed for traditional processes, their performance in highly iterative processes is investigated here. Additionally, a simple linear model is developed and compared to the SRGMs. The paper reports results from applying these models to fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
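
The simple linear model mentioned above can be illustrated with a short sketch. The following is a minimal example, not the study's actual models or data: it fits a least-squares linear trend to made-up weekly fault counts and extrapolates the fault inflow for the next few weeks.

```python
# A minimal sketch of a simple linear fault-inflow model; the counts are invented.
import numpy as np

weeks = np.arange(1, 13)                                           # project weeks 1..12
faults = np.array([5, 8, 7, 11, 9, 14, 12, 15, 13, 17, 16, 19])   # faults found per week (hypothetical)

# Least-squares fit of fault inflow as a linear function of the week number
slope, intercept = np.polyfit(weeks, faults, deg=1)

# Predict fault inflow for the next three weeks
future_weeks = np.arange(13, 16)
predicted = slope * future_weeks + intercept
for w, p in zip(future_weeks, predicted):
    print(f"week {w}: predicted fault inflow ~ {p:.1f}")
```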

Relevance: 30.00%

Abstract:

Successful knowledge transfer is an important process that requires continuous improvement in today’s knowledge-intensive economy. However, improving knowledge transfer processes is a challenge for construction practitioners because of the complexity of knowledge acquisition, codification and sharing. Although knowledge transfer is context based, understanding the critical success factors can lead to improvements in the transfer process. This paper seeks to identify and evaluate the most significant critical factors for improving knowledge transfer processes in Public Private Partnership/Private Finance Initiative (PPP/PFI) projects. Data from a questionnaire survey of 52 construction firms located in the UK are analysed using the Severity Index (SI) and Coefficient of Variation (COV) to examine and identify these factors in PPP/PFI schemes. The findings suggest that supportive leadership, participation and commitment from the relevant parties, and good communication between those parties are crucial to improving knowledge transfer processes in PFI schemes. Practitioners, managers and researchers can use the findings to efficiently design performance measures for analysing and improving knowledge transfer processes.
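
As a rough illustration of the two statistics named above, the sketch below computes one common formulation of the Severity Index and the Coefficient of Variation for a single factor rated on a 1–5 scale; the ratings are invented and the exact SI formulation used in the paper may differ.

```python
# A minimal sketch of SI and COV for one factor; responses are hypothetical 1-5 ratings.
import numpy as np

responses = np.array([5, 4, 4, 5, 3, 4, 5, 4, 3, 5])   # hypothetical ratings from 10 firms

# Severity Index: sum of ratings relative to the maximum possible score, as a percentage
si = 100.0 * responses.sum() / (responses.size * 5)

# Coefficient of Variation: relative dispersion of the ratings, as a percentage
cov = 100.0 * responses.std(ddof=1) / responses.mean()

print(f"SI = {si:.1f}%, COV = {cov:.1f}%")
```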

Relevance: 30.00%

Abstract:

This paper presents a controller design scheme for a priori unknown non-linear dynamical processes that are identified via an operating point neurofuzzy system from process data. Based on a neurofuzzy design and model construction algorithm (NeuDec) for a non-linear dynamical process, a neurofuzzy state-space model of controllable form is initially constructed. The control scheme based on closed-loop pole assignment is then utilized to ensure the time invariance and linearization of the state equations so that the system stability can be guaranteed under some mild assumptions, even in the presence of modelling error. The proposed approach requires a known state vector for the application of pole assignment state feedback. For this purpose, a generalized Kalman filtering algorithm with coloured noise is developed on the basis of the neurofuzzy state-space model to obtain an optimal state vector estimation. The derived controller is applied in typical output tracking problems by minimizing the tracking error. Simulation examples are included to demonstrate the operation and effectiveness of the new approach.
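
The pole-assignment step can be illustrated in isolation. The sketch below is not the NeuDec/neurofuzzy scheme itself; it only shows closed-loop pole placement by state feedback for a small, arbitrary linear state-space model, using SciPy's `place_poles`. The matrices and pole locations are illustrative assumptions.

```python
# A minimal sketch of closed-loop pole assignment by state feedback u = -K x.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # open-loop state matrix (hypothetical)
B = np.array([[0.0],
              [1.0]])                 # input matrix (hypothetical)

desired_poles = np.array([-4.0, -5.0])        # desired closed-loop pole locations

K = place_poles(A, B, desired_poles).gain_matrix

# The closed-loop matrix A - B K should have eigenvalues at the desired poles
closed_loop = A - B @ K
print("feedback gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(closed_loop))
```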

Relevance: 30.00%

Abstract:

The modelling of nonlinear stochastic dynamical processes from data involves solving the problems of data gathering, preprocessing, model architecture selection, learning or adaptation, parametric evaluation and model validation. For a given model architecture, such as associative memory networks, a common problem in nonlinear modelling is the "curse of dimensionality". A series of complementary data-based constructive identification schemes, mainly (but not exclusively) based on operating-point-dependent fuzzy models, are introduced in this paper with the aim of overcoming the curse of dimensionality. These include (i) a mixture-of-experts algorithm based on a forward constrained regression algorithm; (ii) an inherently parsimonious Delaunay input-space partition for piecewise local linear modelling; (iii) a neurofuzzy model construction approach based on forward orthogonal least squares and optimal experimental design; and finally (iv) a neurofuzzy model construction algorithm based on Bézier–Bernstein polynomial basis functions and additive decomposition. Illustrative examples demonstrate their applicability, showing that the final major hurdle in data-based modelling has almost been removed.
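
As a small illustration of approach (iv), the sketch below evaluates a Bézier–Bernstein polynomial basis of a chosen degree; it shows only the basis functions themselves, not the model construction algorithm built on them, and the degree and evaluation points are arbitrary.

```python
# A minimal sketch of the Bernstein polynomial basis on [0, 1].
import numpy as np
from math import comb

def bernstein_basis(x, degree):
    """Evaluate all Bernstein basis functions of the given degree at points x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    return np.stack([comb(degree, k) * x**k * (1.0 - x)**(degree - k)
                     for k in range(degree + 1)], axis=-1)

x = np.linspace(0.0, 1.0, 5)
basis = bernstein_basis(x, degree=3)
print(basis)
print(basis.sum(axis=-1))   # each row sums to 1 (partition of unity)
```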

Relevance: 30.00%

Abstract:

In this paper, we investigate the role of judgement in the formation of forecasts in commercial property markets. The investigation is based on interview surveys with the majority of UK forecast producers, who use a range of inputs and data sets to build models predicting an array of variables for a range of locations. The findings suggest that forecasts need to be acceptable to their users (and purchasers), and consequently forecasters generally have incentives to avoid presenting contentious or conspicuous forecasts. Where extreme forecasts are generated by a model, forecasters often engage in ‘self-censorship’ or are ‘censored’ following in-house consultation. It is concluded that the forecasting process is significantly more complex than merely carrying out econometric modelling, that forecasts are mediated and contested within organisations, and that impacts can vary considerably across different organisational contexts.

Relevance: 30.00%

Abstract:

In this paper we investigate the role of judgement in the formation of forecasts in commercial real estate markets. Based on interview surveys with the majority of forecast producers, we find that real estate forecasters use a range of inputs and data sets to build models predicting an array of variables for a range of locations. The findings suggest that forecasts need to be acceptable to their users (and purchasers), and consequently forecasters generally have incentives to avoid presenting contentious or conspicuous forecasts. Where extreme forecasts are generated by a model, forecasters often engage in ‘self-censorship’ or are ‘censored’ following in-house consultation. It is concluded that the forecasting process is more complex than merely carrying out econometric modelling and that the impact of the influences within this process varies considerably across different organisational contexts.

Relevance: 30.00%

Abstract:

This paper derives exact discrete time representations for data generated by a continuous time autoregressive moving average (ARMA) system with mixed stock and flow data. The representations for systems comprised entirely of stocks or of flows are also given. In each case the discrete time representations are shown to be of ARMA form, the orders depending on those of the continuous time system. Three examples and applications are also provided, two of which concern the stationary ARMA(2, 1) model with stock variables (with applications to sunspot data and a short-term interest rate) and one concerning the nonstationary ARMA(2, 1) model with a flow variable (with an application to U.S. nondurable consumers’ expenditure). In all three examples the presence of an MA(1) component in the continuous time system has a dramatic impact on eradicating unaccounted-for serial correlation that is present in the discrete time version of the ARMA(2, 0) specification, even though the form of the discrete time model is ARMA(2, 1) for both models.
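
For orientation, a continuous time ARMA(2, 1) system of the kind discussed is often written as below. This is generic textbook notation and a schematic of the stock-variable case only, not necessarily the paper's exact formulation or derivation.

```latex
% Continuous-time ARMA(2,1): D is the mean-square differential operator and
% \zeta(t) is continuous-time white noise.
\bigl(D^{2} - a_{1}D - a_{2}\bigr)\,x(t) = \bigl(1 + \theta D\bigr)\,\zeta(t)
% With stock variables observed at equal intervals h (writing x_{t} \equiv x(th)),
% the exact discrete-time representation is itself ARMA(2,1):
x_{t} = \phi_{1}x_{t-1} + \phi_{2}x_{t-2} + \varepsilon_{t} + \theta_{1}\varepsilon_{t-1},
\qquad
\phi_{1} = e^{\lambda_{1}h} + e^{\lambda_{2}h}, \quad
\phi_{2} = -e^{(\lambda_{1}+\lambda_{2})h},
% where \lambda_{1}, \lambda_{2} are the roots of \lambda^{2} - a_{1}\lambda - a_{2} = 0
% and the discrete MA parameter \theta_{1} is fixed by matching autocovariances.
```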

Relevance: 30.00%

Abstract:

The effect of episodic drought on dissolved organic carbon (DOC) dynamics in peatlands has been the subject of considerable debate, as decomposition and DOC production are thought to increase under aerobic conditions, yet decreased DOC concentrations have been observed during drought periods. Decreased DOC solubility due to drought-induced acidification driven by sulphur (S) redox reactions has been proposed as a causal mechanism; however, the evidence is based on a limited number of studies carried out at a few sites. To test this hypothesis on a range of different peats, we carried out controlled drought simulation experiments on peat cores collected from six sites across Great Britain. Our data show a concurrent increase in sulphate (SO4) and a decrease in DOC across all sites during simulated water table draw-down, although the magnitude of the relationship between SO4 and DOC differed between sites. Instead, we found a consistent relationship across all sites between the decrease in DOC and acidification as measured by the pore water acid neutralising capacity (ANC). ANC provided a more consistent measure of drought-induced acidification than SO4 alone because it accounts for differences in base cation and acid anion concentrations between sites. Rewetting resulted in rapid DOC increases without a concurrent increase in soil respiration, suggesting that DOC changes were primarily controlled by soil acidity rather than by soil biota. These results highlight the need for an integrated analysis of hydrologically driven chemical and biological processes in peatlands to improve our understanding and ability to predict the interaction between atmospheric pollution and changing climatic conditions from plot to regional and global scales.
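
The pore water acid neutralising capacity referred to above is commonly computed from a charge balance. The following is one standard definition, with all concentrations expressed in equivalents (e.g. µeq L⁻¹); the formulation used in the study may differ in the ions included.

```latex
% Charge-balance definition of acid neutralising capacity (concentrations in eq L^{-1}):
\mathrm{ANC}
  = \bigl([\mathrm{Ca}^{2+}] + [\mathrm{Mg}^{2+}] + [\mathrm{Na}^{+}] + [\mathrm{K}^{+}] + [\mathrm{NH_{4}}^{+}]\bigr)
  - \bigl([\mathrm{SO_{4}}^{2-}] + [\mathrm{NO_{3}}^{-}] + [\mathrm{Cl}^{-}]\bigr)
```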

Relevance: 30.00%

Abstract:

The ASTER Global Digital Elevation Model (GDEM) has made elevation data at 30 m spatial resolution freely available, enabling reinvestigation of morphometric relationships derived from limited field data using much larger sample sizes. These data are used to analyse a range of morphometric relationships derived for dunes (between dune height, spacing, and equivalent sand thickness) in the Namib Sand Sea, which was chosen because there are a number of extant studies that could be used for comparison with the results. The relative accuracy of GDEM for capturing dune height and shape was tested against multiple individual ASTER DEM scenes and against field surveys, highlighting the smoothing of the dune crest and resultant underestimation of dune height, and the omission of the smallest dunes, because of the 30 m sampling of ASTER DEM products. It is demonstrated that morphometric relationships derived from GDEM data are broadly comparable with relationships derived by previous methods, across a range of different dune types. The data confirm patterns of dune height, spacing and equivalent sand thickness mapped previously in the Namib Sand Sea, but add new detail to these patterns.
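
As a rough illustration of the morphometric quantities involved, the sketch below derives dune height, crest spacing, and an equivalent sand thickness from a synthetic one-dimensional elevation transect. The peak-detection approach and the approximation of equivalent sand thickness as mean sand depth above the interdune base are assumptions made here for illustration, not the paper's method.

```python
# A minimal sketch of dune morphometrics from a synthetic 1-D elevation transect.
import numpy as np
from scipy.signal import find_peaks

dx = 30.0                                       # sample spacing along the transect, m (ASTER GDEM pixel size)
x = np.arange(0.0, 6000.0, dx)
transect = 25.0 * (1.0 + np.sin(2.0 * np.pi * x / 1500.0))   # synthetic dunes: ~50 m high, ~1500 m spacing

crests, _ = find_peaks(transect)                # crest positions (indices)
troughs, _ = find_peaks(-transect)              # interdune trough positions (indices)

base = np.interp(x, x[troughs], transect[troughs])   # interpolated interdune base surface
dune_height = transect[crests] - base[crests]
crest_spacing = np.diff(x[crests])
est = np.mean(transect - base)                  # equivalent sand thickness: mean sand depth above the base

print(f"mean dune height {dune_height.mean():.1f} m, "
      f"mean crest spacing {crest_spacing.mean():.0f} m, EST {est:.1f} m")
```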

Relevance: 30.00%

Abstract:

Accurate replication of the processes associated with the energetics of the tropical ocean is necessary if coupled GCMs are to simulate the physics of ENSO correctly, including the transfer of energy from the winds to the ocean thermocline and energy dissipation during the ENSO cycle. Here, we analyze ocean energetics in coupled GCMs in terms of two integral parameters describing net energy loss in the system using the approach recently proposed by Brown and Fedorov (J Clim 23:1563–1580, 2010a) and Fedorov (J Clim 20:1108–1117, 2007). These parameters are (1) the efficiency c of the conversion of wind power into the buoyancy power that controls the rate of change of the available potential energy (APE) in the ocean and (2) the e-folding rate a that characterizes the damping of APE by turbulent diffusion and other processes. Estimating these two parameters for coupled models reveals potential deficiencies (and large differences) in how state-of-the-art coupled GCMs reproduce the ocean energetics as compared to ocean-only models and data assimilating models. The majority of the coupled models we analyzed show a lower efficiency (values of c in the range of 10–50% versus 50–60% for ocean-only simulations or reanalysis) and a relatively strong energy damping (values of a⁻¹ in the range 0.4–1 years versus 0.9–1.2 years). These differences in the model energetics appear to reflect differences in the simulated thermal structure of the tropical ocean, the structure of ocean equatorial currents, and deficiencies in the way coupled models simulate ENSO.
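
One way to read these two parameters together is as a schematic budget for the available potential energy. The balance below is an illustrative interpretation using the symbols from the summary above (c for the conversion efficiency, a for the damping rate, W for wind power); it is not the exact diagnostic framework of the cited papers.

```latex
% Schematic APE budget: wind power W is converted to buoyancy power with
% efficiency c, while APE is damped at the e-folding rate a.
\frac{d\,\mathrm{APE}}{dt} \;\approx\; c\,W \;-\; a\,\mathrm{APE}
```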

Relevance: 30.00%

Abstract:

Ice cloud representation in general circulation models remains a challenging task, due to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate the ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while underestimating ice cloud occurrence at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and the range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution due to different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version with a diagnostic representation of precipitating snow and mixed-phase ice cloud fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.
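
As a sketch of the kind of statistical comparison described, the example below bins synthetic grid-box mean IWC samples by temperature and reports cloud occurrence and mean in-cloud IWC per bin. The values, cloud fraction, and bin width are invented and do not reproduce the study's analysis.

```python
# A minimal sketch of binning IWC samples by temperature; all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(-80.0, 0.0, 10000)              # °C, synthetic collocated samples
iwc = np.where(rng.random(10000) < 0.3,                   # ~30% of samples cloudy (assumed)
               10.0 ** rng.uniform(-3.0, -1.0, 10000),    # g m-3, synthetic in-cloud values
               0.0)

bins = np.arange(-80.0, 1.0, 10.0)                        # 10 °C temperature bins
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (temperature >= lo) & (temperature < hi)
    cloudy = iwc[sel] > 0.0
    occurrence = 100.0 * cloudy.mean()
    mean_iwc = iwc[sel][cloudy].mean() if cloudy.any() else float("nan")
    print(f"{lo:4.0f} to {hi:3.0f} °C: occurrence {occurrence:4.1f}%, mean IWC {mean_iwc:.2e} g m-3")
```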

Relevance: 30.00%

Abstract:

We present an approach for dealing with coarse-resolution Earth observations (EO) in terrestrial ecosystem data assimilation schemes. The use of coarse-scale observations in ecological data assimilation schemes is complicated by spatial heterogeneity and nonlinear processes in natural ecosystems. If these complications are not appropriately dealt with, the data assimilation will produce biased results. The “disaggregation” approach that we describe in this paper combines frequent coarse-resolution observations with temporally sparse fine-resolution measurements. We demonstrate the approach using a demonstration data set based on measurements of an Arctic ecosystem. In this example, normalized difference vegetation index observations are assimilated into a “zero-order” model of leaf area index and carbon uptake. The disaggregation approach conserves key ecosystem characteristics regardless of the observation resolution and estimates the carbon uptake to within 1% of the demonstration data set “truth.” Assimilating the same data in the normal manner, but without the disaggregation approach, results in carbon uptake being underestimated by 58% at an observation resolution of 250 m. The disaggregation method allows the combination of multiresolution EO data and improves in spatial resolution if observations are located on a grid that shifts from one observation time to the next. Additionally, the approach is not tied to a particular data assimilation scheme, model, or EO product and can cope with complex observation distributions, as it makes no implicit assumptions of normality.
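
The core difficulty can be seen with a toy example: when the observation operator is nonlinear, applying it to the mean state of a heterogeneous coarse cell is not the same as averaging it over the fine-resolution pixels, which is the source of the bias the disaggregation approach is designed to avoid. The operator and values below are hypothetical and are not those used in the paper.

```python
# A minimal sketch of the nonlinearity/heterogeneity bias in coarse-resolution assimilation.
import numpy as np

def ndvi_from_lai(lai):
    """Hypothetical nonlinear observation operator: NDVI saturating with leaf area index."""
    return 0.9 * (1.0 - np.exp(-0.6 * lai))

# A heterogeneous coarse cell containing 25 fine-resolution pixels with different LAI values
fine_lai = np.random.default_rng(1).gamma(shape=2.0, scale=1.0, size=25)

naive = ndvi_from_lai(fine_lai.mean())        # operator applied to the cell-mean state
correct = ndvi_from_lai(fine_lai).mean()      # mean of the operator over the fine pixels

print(f"H(mean LAI) = {naive:.3f},  mean of H(LAI) = {correct:.3f}")
```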

Relevance: 30.00%

Abstract:

Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case where dynamic constraints are applied, and assess the impact of dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations and quite large data gaps.
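
For orientation, a weak-constraint variational data assimilation system of this kind typically minimises a cost function of the generic form below; this is the standard textbook form, not necessarily the exact EO-LDAS formulation.

```latex
% Generic weak-constraint variational cost function with prior, observation, and
% dynamic-model terms. B, R_t and Q are the prior, observation and model-error
% covariances; H is the observation operator (here a radiative transfer model)
% and M the (zero- or first-order) dynamic model.
J(\mathbf{x}) =
  \tfrac{1}{2}(\mathbf{x}_{0}-\mathbf{x}_{b})^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}_{0}-\mathbf{x}_{b})
  + \tfrac{1}{2}\sum_{t}\bigl(\mathbf{y}_{t}-H(\mathbf{x}_{t})\bigr)^{\mathsf T}\mathbf{R}_{t}^{-1}\bigl(\mathbf{y}_{t}-H(\mathbf{x}_{t})\bigr)
  + \tfrac{1}{2}\sum_{t}\bigl(\mathbf{x}_{t}-M(\mathbf{x}_{t-1})\bigr)^{\mathsf T}\mathbf{Q}^{-1}\bigl(\mathbf{x}_{t}-M(\mathbf{x}_{t-1})\bigr)
```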