808 results for Lattice-based construction
Abstract:
The construction industry incurs a considerable amount of waste as a result of poor logistics supply chain network management; managing logistics in the construction industry is therefore critical. An effective logistics system ensures delivery of the right products and services to the right players at the right time while minimising costs and rewarding all sectors based on value added to the supply chain. This paper reports on an ongoing research study on the concept of context-aware service delivery in construction project supply chain logistics. As part of the emerging wireless technologies, an Intelligent Wireless Web (IWW) with context-aware computing capability represents the next-generation ICT application for construction logistics management. This intelligent system has the potential to serve and improve construction logistics through access to context-specific data, information and services. Existing mobile communication deployments in the construction industry rely on static modes of information delivery and do not take into account the worker's changing context and dynamic project conditions. The major problems in these applications are the lack of context-specificity in the distribution of information, services and other project resources, and the lack of cohesion with the existing desktop-based ICT infrastructure. The research focuses on identifying context dimensions such as user context, environmental context and project context; selecting technologies to capture context parameters, such as wireless sensors and RFID; and selecting supporting technologies such as wireless communication, the Semantic Web, Web Services and agents. The integration of context-aware computing and Web Services to create an intelligent collaboration environment for managing construction logistics will take into account all the necessary critical parameters, such as storage, transportation, distribution and assembly, both off-site and on-site.
Abstract:
Purpose - Unplanned changes in construction projects are common and lead to disruptive effects such as project delays, cost overruns and quality deviations. Rework due to unplanned changes can cost 10-15 per cent of contract value. By managing these changes more effectively, their disruptive effects can be minimised. Previous research has approached this problem from an information-processing view. In this knowledge age, the purpose of this paper is to argue that effective change management can be brought about by better understanding the significant role of knowledge during change situations. Design/methodology/approach - Within this knowledge-based context, the question of how construction project teams manage knowledge during unplanned change in the construction phase within collaborative team settings is investigated through a selected case study sample within the UK construction industry. Findings - The case study findings conclude that different forms of knowledge are created and shared between project team members during change events, a process that is largely socially constructed and centred on the tacit knowledge and experience of project personnel. Originality/value - Building on the case study findings, the paper offers a model that represents the role of knowledge in managing project change.
Abstract:
Innovation is notoriously difficult to define and is invariably intertwined with issues of knowledge creation, continuous improvement and organisational change. An extensive literature classifies numerous types of innovation and militates against any simplistic attempt at definition. It is widely accepted that innovation is at least partly dependent upon the surrounding environment. Industry recipes and institutionally embedded practices shape the environment within which innovation occurs. Recent research directions have addressed the diffusion of innovation and its dependence upon social and institutional structures. In this respect, it is highly pertinent to compare the way that innovation is interpreted and enacted in different industrial sectors. The comparison between UK aerospace and construction is especially revealing because the two sectors are so different and therefore constitute radically different climates for innovation. Empirical research is reported based on semi-structured interviews with practitioners from both sectors. Interpretations of innovation are found to differ dramatically between aerospace and construction. Within the context of an ongoing struggle to define innovation, both industries are striving to become more innovative. The aerospace sector is found to emphasise technical innovation whereas the construction sector emphasises process innovation. An overriding cultural bias in Western economies towards technological innovation results in the common perception that aerospace is much more innovative than construction. The experienced realities of practitioners in the two sectors are much more complex.
Abstract:
Perceptual grouping is a pre-attentive process which serves to group local elements into global wholes, based on shared properties. One effect of perceptual grouping is to distort the ability to estimate the distance between two elements. In this study, biases in distance estimates caused by four types of perceptual grouping were measured across three tasks: a perception, a drawing and a construction task, in both typical development (TD: Experiment 1) and in individuals with Williams syndrome (WS: Experiment 2). In Experiment 1, perceptual grouping distorted distance estimates across all three tasks. Interestingly, the effect of grouping by luminance was in the opposite direction to the effects of the remaining grouping types. We relate this to differences in the ability to inhibit perceptual grouping effects on distance estimates. Additive distorting influences were also observed in the drawing and construction tasks, which are explained in terms of the points of reference employed in each task. Experiment 2 demonstrated that the above distortion effects are also observed in WS. Given the known deficit in the ability to use perceptual grouping in WS, this suggests a dissociation between the pre-attentive influence and the attentive deployment of perceptual grouping in WS. The typical distortion in relation to drawing and construction points towards the presence of some typical location-coding strategies in WS. The performance of the WS group differed from that of the TD participants on two counts. First, the pattern of overall distance estimates (averaged across interior and exterior distances) across the four perceptual grouping types differed between groups. Second, the distorting influence of perceptual grouping was strongest for grouping by shape similarity in WS, which contrasts with the strength in grouping by proximity observed in the TD participants.
Abstract:
The paper describes the implementation of an offline, low-cost Brain Computer Interface (BCI) alternative to more expensive commercial models. Using inexpensive general-purpose clinical EEG acquisition hardware (Truscan32, Deymed Diagnostic) as the base unit, a synchronisation module was constructed to allow the EEG hardware to be operated precisely in time, enabling the recording of automatically time-stamped EEG signals. The synchronising module allows the EEG recordings to be aligned in a stimulus-time-locked fashion for further processing by the classifier to establish the class of the stimulus, sample by sample. This allows for the acquisition of signals from the subject's brain for goal-oriented BCI applications based on the oddball paradigm. An appropriate graphical user interface (GUI) was constructed and implemented as the method to elicit the required responses (in this case event-related potentials, or ERPs) from the subject.
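As a rough illustration of the stimulus-time-locked alignment described above, the following Python sketch cuts fixed-length epochs around stimulus onsets and averages them into an ERP. The array shapes, window lengths and synthetic data are illustrative assumptions, not details taken from the paper.

```python
# Rough sketch (synthetic data; array shapes and window lengths are
# assumptions, not details from the paper) of stimulus-time-locked epoch
# extraction and averaging to obtain an ERP.
import numpy as np

def extract_epochs(eeg, stim_samples, pre=50, post=200):
    """Cut fixed-length windows around each time-stamped stimulus onset."""
    epochs = [eeg[:, s - pre:s + post]
              for s in stim_samples
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.stack(epochs)                     # (n_epochs, channels, samples)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 10_000))          # synthetic 8-channel recording
onsets = rng.integers(100, 9_700, size=40)      # synthetic stimulus onsets
erp = extract_epochs(eeg, onsets).mean(axis=0)  # class-average response
print(erp.shape)                                # (8, 250)
```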
Abstract:
An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample Parzen window density estimate.
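For context, the benchmark referred to in this and the following abstract is the full-sample Parzen window estimate, which places one kernel on every training point. A minimal Python sketch, with an assumed Gaussian kernel, an arbitrary bandwidth h and synthetic data, is:

```python
# Minimal sketch (assumptions: Gaussian kernel, arbitrary bandwidth h,
# synthetic data) of the full-sample Parzen window density estimate that
# the sparse construction is compared against.
import numpy as np

def parzen_density(x_train, x_query, h):
    """p(x) = (1/N) * sum_k N(x; x_k, h^2 I) with a Gaussian kernel."""
    d = x_train.shape[1]
    norm = (2.0 * np.pi * h**2) ** (-d / 2.0)
    diffs = x_query[:, None, :] - x_train[None, :, :]   # (Q, N, d)
    sq = np.sum(diffs**2, axis=2)                       # squared distances
    return norm * np.mean(np.exp(-sq / (2.0 * h**2)), axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))     # training sample: one kernel per point
q = rng.standard_normal((5, 2))       # query points
print(parzen_density(X, q, h=0.5))
```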
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
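The forward-selection idea can be sketched as follows: grow the model one kernel at a time, keeping the candidate that gives the lowest leave-one-out score, and stop automatically once the score no longer improves. This simplified Python version uses the generic closed-form LOO residuals of least squares rather than the paper's orthogonal-forward-regression recursions, and regresses on the Parzen estimate as a stand-in target, so it is illustrative only.

```python
# Simplified greedy forward selection with automatic LOO-based termination.
# Assumptions: generic hat-matrix LOO residuals instead of the paper's OFR
# recursions; the Parzen estimate as a surrogate regression target.
import numpy as np

def loo_score(Phi, y):
    """Mean squared LOO residual of the least-squares fit y ~ Phi @ w."""
    H = Phi @ np.linalg.pinv(Phi)              # hat (projection) matrix
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H)))**2)

def forward_select(K, y):
    """Greedily add kernel columns while the LOO score keeps improving."""
    chosen, best = [], np.inf
    while True:
        candidates = [j for j in range(K.shape[1]) if j not in chosen]
        if not candidates:
            return chosen
        score, j = min((loo_score(K[:, chosen + [j]], y), j)
                       for j in candidates)
        if score >= best:                      # automatic termination
            return chosen
        chosen.append(j)
        best = score

rng = np.random.default_rng(4)
X = rng.standard_normal((120, 1))
K = np.exp(-(X - X.T)**2 / (2 * 0.3**2))       # candidate Gaussian kernels
y = K.mean(axis=1)                             # Parzen estimate as stand-in target
print(forward_select(K, y))                    # indices of the sparse kernel set
```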
Abstract:
An orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria is proposed for the construction of radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines an RBF node, namely its center vector and diagonal covariance matrix, by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean-square error, while the LOO misclassification rate is adopted in two-class classification applications. This OFS-LOO algorithm is computationally efficient, and it is capable of constructing parsimonious RBF networks that generalize well. Moreover, the proposed algorithm is fully automatic, and the user does not need to specify a termination criterion for the construction process. The effectiveness of the proposed RBF network construction procedure is demonstrated using examples taken from both regression and classification applications.
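Each stage of such a construction evaluates candidate nodes whose centre and diagonal covariance are both free parameters. The Python sketch below shows one assumed Gaussian node form; the grid of candidate widths and the synthetic data merely stand in for the paper's LOO-driven tuning search.

```python
# Sketch of a tunable RBF node (assumed Gaussian form with a diagonal
# covariance); the grid of candidate widths and the synthetic data are
# illustrative stand-ins for the paper's LOO-driven tuning search.
import numpy as np

def rbf_node(X, center, diag_cov):
    """phi(x) = exp(-0.5 * sum_d (x_d - c_d)^2 / sigma_d^2)."""
    return np.exp(-0.5 * np.sum((X - center)**2 / diag_cov, axis=1))

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 2))
centre = X[0]                                   # candidate centre vector
for width in (0.25, 1.0, 4.0):                  # candidate diagonal variances
    phi = rbf_node(X, centre, np.full(2, width))
    print(width, phi[:3])
```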
Abstract:
A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting, based on the Fisher ratio as a class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio for class separability. An efficient weighted optimization method is developed based on boosting to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
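The Fisher ratio used as the separability measure is the squared difference of class means over the sum of class variances of a one-dimensional feature, here taken to be the response of a candidate kernel. A minimal sketch on assumed synthetic data:

```python
# Minimal sketch of the Fisher ratio F = (m1 - m2)^2 / (s1^2 + s2^2) for a
# one-dimensional feature such as a candidate kernel's response; the
# synthetic feature below is an assumption for demonstration.
import numpy as np

def fisher_ratio(f, labels):
    """Class separability of feature f under binary labels."""
    f1, f2 = f[labels == 1], f[labels == 0]
    return (f1.mean() - f2.mean())**2 / (f1.var() + f2.var() + 1e-12)

rng = np.random.default_rng(6)
labels = np.r_[np.zeros(100, dtype=int), np.ones(100, dtype=int)]
f = labels + 0.5 * rng.standard_normal(200)     # separable synthetic feature
print(fisher_ratio(f, labels))
```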
Abstract:
This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and is based on M-estimators of the parameters. The M-estimator is a classical robust parameter estimation technique for tackling bad data conditions such as outliers. Computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion from experimental design for tackling ill-conditioning in the model structure. OFR, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions, with improved performance via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
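A common concrete instance of an M-estimator derived via IRLS is the Huber estimator: ordinary least squares is iteratively reweighted so that large residuals receive down-weighted influence. In the sketch below, the threshold c = 1.345, the MAD-based scale estimate and the synthetic outliers are standard textbook choices, not values taken from the correspondence.

```python
# IRLS sketch for a Huber M-estimator of linear-model parameters. The
# threshold c = 1.345 and the MAD-based scale are standard textbook
# choices, not values taken from the correspondence.
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50, tol=1e-8):
    w = np.linalg.lstsq(X, y, rcond=None)[0]        # least-squares start
    for _ in range(n_iter):
        r = y - X @ w
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        weights = np.where(u <= c, 1.0, c / u)      # Huber weight function
        sw = np.sqrt(weights)
        w_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        delta = np.max(np.abs(w_new - w))
        w = w_new
        if delta < tol:
            break
    return w

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
y[:5] += 20.0                                       # inject gross outliers
print(huber_irls(X, y))                             # close to [1, -2, 0.5]
```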
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristics (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new ROWLS parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
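The AUC itself is the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, which is why it is robust to class imbalance. The paper evaluates a leave-one-out variant through recursive formulae; the plain (non-LOO) quantity can be sketched via the Mann-Whitney statistic on assumed synthetic scores:

```python
# Plain (non-LOO) AUC via the Mann-Whitney statistic: the probability that
# a random positive is scored above a random negative, ties counting half.
# The paper's recursive LOO-AUC updates are not reproduced here.
import numpy as np

def auc(scores, labels):
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # all (pos, neg) pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(8)
labels = np.r_[np.zeros(150, dtype=int), np.ones(150, dtype=int)]
scores = labels + rng.standard_normal(300)      # imperfect classifier scores
print(auc(scores, labels))
```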
Abstract:
New construction algorithms for radial basis function (RBF) network modelling are introduced, based on the A-optimality and D-optimality experimental design criteria respectively. We utilize new cost functions, based on these experimental design criteria, for model selection that simultaneously optimize model approximation and parameter variance (A-optimality) or model robustness (D-optimality). The proposed approaches build on the forward orthogonal least-squares (OLS) algorithm, such that the new A-optimality- and D-optimality-based cost functions are constructed on the basis of an orthogonalization process that gains computational advantages and hence maintains the inherent computational efficiency associated with the conventional forward OLS approach. The proposed approach enhances the very popular forward-OLS-based RBF model construction method, since the resultant RBF models are constructed in such a way that the system dynamics approximation capability, model adequacy and robustness are optimized simultaneously. The numerical examples provided show significant improvement based on the D-optimality design criterion, demonstrating that there is significant room for improvement in modelling via the popular RBF neural network.
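The computational advantage of pairing D-optimality with forward OLS comes from orthogonalisation: after a (modified) Gram-Schmidt decomposition Phi = W A with unit-diagonal triangular A, det(Phi'Phi) = det(W'W) is simply the product of the squared column norms, so the D-optimality cost becomes an additive, term-by-term quantity. A small numerical check on synthetic data:

```python
# Numerical check of the identity the OLS-based D-optimality cost rests on:
# with Phi = W A (A unit-diagonal triangular), det(Phi'Phi) = det(W'W)
# = prod_k kappa_k where kappa_k = w_k'w_k. Data are synthetic.
import numpy as np

def gram_schmidt(Phi):
    """Modified Gram-Schmidt without normalisation, as in forward OLS."""
    W = Phi.astype(float).copy()
    for k in range(W.shape[1]):
        for j in range(k):
            W[:, k] -= (W[:, j] @ W[:, k]) / (W[:, j] @ W[:, j]) * W[:, j]
    return W

rng = np.random.default_rng(9)
Phi = rng.standard_normal((100, 6))
kappa = np.sum(gram_schmidt(Phi)**2, axis=0)    # squared column norms
print(np.sum(np.log(kappa)))                    # additive D-optimality cost
print(np.linalg.slogdet(Phi.T @ Phi)[1])        # agrees: log det(Phi'Phi)
```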
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors by choosing amongst different network architectures (M. Stone, "Cross validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of LOO criteria, either the mean squares of the LOO errors or the LOO misclassification rate respectively, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure the orthogonality between the subspace spanned by the pruned model and the deleted regressor. Subsequently, it is shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, as derived in this contribution, without actually splitting the estimation data set, so as to reduce computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several aspects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification are used to demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
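A simplified backward-elimination loop in this spirit: start from the full model, repeatedly delete the regressor whose removal most reduces the LOO mean square error, and stop when no deletion helps. This sketch uses the generic hat-matrix identity for LOO residuals instead of the paper's recursive formulae, so it refits at each trial and is illustrative only.

```python
# Simplified backward elimination: from the full model, repeatedly delete
# the regressor whose removal most reduces the LOO mean square error, and
# stop when no deletion helps. The closed-form LOO residuals
# e_(-i) = e_i / (1 - h_ii) replace the paper's recursive formulae.
import numpy as np

def loo_mse(Phi, y):
    H = Phi @ np.linalg.pinv(Phi)               # hat matrix
    e = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(e**2)

def backward_eliminate(Phi, y):
    keep = list(range(Phi.shape[1]))
    best = loo_mse(Phi, y)
    while len(keep) > 1:
        score, j = min((loo_mse(Phi[:, [c for c in keep if c != j]], y), j)
                       for j in keep)
        if score >= best:                       # pruning no longer helps
            break
        keep.remove(j)
        best = score
    return keep

rng = np.random.default_rng(10)
Phi = rng.standard_normal((120, 8))
y = Phi[:, :3] @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(120)
print(backward_eliminate(Phi, y))               # ideally keeps columns 0-2
```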
Abstract:
An automatic nonlinear predictive model-construction algorithm is introduced based on forward regression and the predicted-residual-sums-of-squares (PRESS) statistic. The proposed algorithm is based on the fundamental concept of evaluating a model's generalisation capability through cross-validation. This is achieved by using the PRESS statistic as a cost function to optimise model structure. In particular, the proposed algorithm is developed with the aim of achieving computational efficiency, such that the computational effort, which would usually be extensive in the computation of the PRESS statistic, is reduced or minimised. The computation of PRESS is simplified by avoiding a matrix inversion through the use of the orthogonalisation procedure inherent in forward regression, and is further reduced significantly by the introduction of a forward-recursive formula. Based on the properties of the PRESS statistic, the proposed algorithm can achieve a fully automated procedure without resort to any other validation data set for iterative model evaluation. Numerical examples are used to demonstrate the efficacy of the algorithm.
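The PRESS statistic for a linear-in-the-parameters model can be computed without any refitting from the hat matrix, PRESS = sum_i (e_i / (1 - h_ii))^2, which is the identity that the forward-recursive formula then accelerates further. A minimal sketch on synthetic data:

```python
# Minimal sketch of the PRESS statistic for a linear-in-the-parameters
# model, computed without refitting via PRESS = sum_i (e_i / (1 - h_ii))^2;
# the design matrix and targets below are synthetic.
import numpy as np

rng = np.random.default_rng(3)
Phi = rng.standard_normal((80, 5))
y = Phi @ rng.standard_normal(5) + 0.1 * rng.standard_normal(80)

H = Phi @ np.linalg.inv(Phi.T @ Phi) @ Phi.T    # hat matrix
e = y - H @ y                                   # ordinary residuals
press = np.sum((e / (1.0 - np.diag(H)))**2)     # predicted residual SS
print(press)
```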
Abstract:
As the learning paradigm shifts to a more personalised learning process, users need dynamic feedback on their knowledge path. Learning Management Systems (LMSs) offer customised feedback dependent on the questions asked and the answers given. However, these LMSs are not designed to generate personalised feedback for an individual learner, tutor or instructional designer. This paper presents an approach for generating constructive feedback for all stakeholders during a personalised learning process. The dynamic personalised feedback model generates feedback based on the learning objectives of the Learning Object. Feedback can be generated at the Learning Object level and the Information Object level, for both the individual learner and the group. The group feedback is meant for tutors and instructional designers to improve the learning process.