172 results for covariance estimator
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way to obtain asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply it to a dataset from a randomized clinical trial.
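A minimal sketch of the induced smoothing idea, assuming a Gehan-type rank score for independent observations (the paper's between-/within-cluster estimators are not reproduced here): the indicator I(e_i < e_j) is replaced by a normal CDF whose bandwidth comes from a smoothing matrix Γ, so the estimating function becomes differentiable and standard root-finding and sandwich covariance calculations apply. The fixed choice Γ = I/n below is an illustrative simplification; the method proper updates Γ jointly with the covariance of the estimator.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

def smoothed_rank_score(beta, X, y, Gamma):
    """Induced-smoothed Gehan-type estimating function: the indicator
    I(e_i < e_j) is replaced by Phi((e_j - e_i) / r_ij), where
    r_ij^2 = (x_i - x_j)' Gamma (x_i - x_j), making the score smooth."""
    e = y - X @ beta
    n, p = X.shape
    score = np.zeros(p)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = X[i] - X[j]
            r = np.sqrt(max(d @ Gamma @ d, 1e-12))
            score += d * norm.cdf((e[j] - e[i]) / r)
    return score / n

# toy usage: solve S(beta) = 0 with a fixed smoothing matrix Gamma = I/n
rng = np.random.default_rng(0)
n, p = 60, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=n)
beta_hat = fsolve(smoothed_rank_score, np.zeros(p), args=(X, y, np.eye(p) / n))
print(beta_hat)  # close to (1.0, -0.5)
```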
Abstract:
We consider the analysis of longitudinal data when the covariance function is modelled by parameters additional to the mean parameters. In general, inconsistent estimators of the covariance (variance/correlation) parameters are produced when the "working" correlation matrix is misspecified, which may result in a substantial loss of efficiency for the mean parameter estimators (although their consistency is preserved). We consider using different "working" correlation models for the variance and the mean parameters. In particular, we find that an independence working model should be used for estimating the variance parameters to ensure their consistency when the correlation structure is misspecified, while the designated "working" correlation matrices should be used for estimating the mean and the correlation parameters to attain high efficiency for the mean parameter estimators. Simulation studies indicate that the proposed algorithm performs very well. We also apply the different estimation procedures to a data set from a clinical trial for illustration.
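As a concrete illustration of working correlation models (using the generic statsmodels GEE implementation, not the authors' algorithm), the sketch below fits the same mean model under an independence and an exchangeable working correlation on data whose true within-cluster correlation is exchangeable; both mean estimates are consistent, but their standard errors differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_clusters, m = 100, 4
groups = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
# a shared cluster effect induces an exchangeable within-cluster correlation
b = np.repeat(rng.normal(scale=0.8, size=n_clusters), m)
y = 1.0 + 0.5 * x + b + rng.normal(size=n_clusters * m)
X = sm.add_constant(x)

# independence working model: consistent for the mean, possibly inefficient
fit_ind = sm.GEE(y, X, groups=groups,
                 cov_struct=sm.cov_struct.Independence()).fit()
# exchangeable working model: matches the truth here, so more efficient
fit_exc = sm.GEE(y, X, groups=groups,
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit_ind.params, fit_ind.bse)
print(fit_exc.params, fit_exc.bse)
```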
Abstract:
The method of generalised estimating equations for regression modelling of clustered outcomes allows the specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation estimator of the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis: (i) the discrepancy between the working correlation structure and the unobservable true correlation structure; (ii) the method by which the correlation parameters are estimated; and (iii) the 'design', by which we refer to both the structure of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
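The dependence of efficiency on features (i) and (iii) can be computed directly from the sandwich formula. A minimal numpy sketch, assuming a linear model with unit marginal variances, known correlation parameters and equal cluster sizes (so feature (ii), estimation of the correlation parameters, is left out):

```python
import numpy as np

def gee_are(X_list, R_true, R_work):
    """Coefficient-wise asymptotic relative efficiency of a linear GEE
    estimator with working correlation R_work versus the estimator that
    uses the true correlation R_true (sandwich-variance calculation)."""
    Ww, Wt = np.linalg.inv(R_work), np.linalg.inv(R_true)
    p = X_list[0].shape[1]
    A, B, C = np.zeros((p, p)), np.zeros((p, p)), np.zeros((p, p))
    for X in X_list:
        A += X.T @ Ww @ X                 # "bread" under the working model
        B += X.T @ Ww @ R_true @ Ww @ X   # "meat" under the true correlation
        C += X.T @ Wt @ X                 # information with the true correlation
    V_work = np.linalg.inv(A) @ B @ np.linalg.inv(A)
    return np.diag(np.linalg.inv(C)) / np.diag(V_work)

# example: AR(1) truth, independence working model, clusters of size 5
m, rho = 5, 0.6
R_true = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
rng = np.random.default_rng(2)
X_list = [np.column_stack([np.ones(m), rng.normal(size=m)]) for _ in range(200)]
print(gee_are(X_list, R_true, np.eye(m)))  # efficiencies <= 1
```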
Abstract:
Despite the best intentions of service providers and organisations, service delivery is rarely error-free. While numerous studies have investigated specific cognitive, emotional or behavioural responses to service failure and recovery, these studies do not fully capture the complexity of the service encounter. Consequently, this research develops a more holistic understanding of how specific service recovery strategies affect the responses of customers by combining two existing models, Smith and Bolton's (2002) model of emotional responses to service performance and Fullerton and Punj's (1993) structural model of aberrant consumer behaviour, into a single conceptual framework. Specific service recovery strategies are proposed to influence consumer cognition, emotion and behaviour. The research was conducted using a 2x2 between-subjects quasi-experimental design administered via written survey. The experimental design manipulated two levels of each of two service recovery strategies: compensation and apology. The effects of the four recovery strategies were investigated by collecting data from 18-25 year olds and were analysed using multivariate analysis of covariance and multiple regression analysis. The results suggest that different service recovery strategies are associated with varying levels of satisfaction, perceived distributive justice, positive emotions, negative emotions and negative functional behaviour, but not dysfunctional behaviour. These findings have significant implications for the theory and practice of managing service recovery.
Abstract:
Purpose: Waiting for service is an important problem for many financial service marketers. Two new approaches are proposed. First, customer evaluation of the service is increased with an ambient scent. Second, a cognitive variable is identified which differentiates customers by the way they value time, so that they can be segmented. Methodology: Pretests, including focus groups which highlighted financial services, and a pilot test were followed by a main sample of 607 subjects. Structural equation modelling and multivariate analysis of covariance were used for analysis. Findings: A cognitive variable, the need for time management, can be used, together with demographic and customer net worth data, to segment a customer base. Two environmental interventions, music and scent, can increase satisfaction among customers kept waiting in a line. Research implications: Two original approaches to a rapidly growing service marketing problem are identified. Practical implications: Service contact points can reduce the incidence of "queue rage" and enhance customer satisfaction by either or both of two simple modifications to the service environment, or by a preventive strategy of offering targeted customers an alternative. Originality: A new method of segmentation and a new environmental intervention are proposed.
Abstract:
Carrier frequency offset (CFO) and I/Q mismatch can cause significant performance degradation in OFDM systems. Their estimation and compensation are generally difficult because they are entangled in the received signal. In this paper, we propose low-complexity estimation and compensation schemes for the receiver which are robust to a wide range of CFO and I/Q mismatch values, although performance is slightly degraded for very small CFO. The schemes consist of three steps: forming a cosine estimator free of I/Q mismatch interference, estimating the I/Q mismatch using the estimated cosine value, and forming a sine estimator from samples after I/Q mismatch compensation. These estimators are based on the observation that an estimate of the cosine serves much better as the basis for I/Q mismatch estimation than an estimate of the CFO derived from the cosine function. Simulation results show that the proposed schemes improve system performance significantly and are robust to CFO and I/Q mismatch.
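A toy simulation of why the two impairments are entangled, and of why a cosine-based quantity resists I/Q mismatch. The receiver model r = α·x + β·conj(x) is the common textbook I/Q-imbalance model, and every parameter value below is an illustrative assumption, not the paper's setup. The delay correlation of a repeated training symbol picks up an image term proportional to |β|²·e^{-jθ}, which biases a naive angle-based CFO estimate, while its real part remains proportional to cos θ; the cosine alone fixes only the magnitude of the CFO, consistent with the abstract's use of a separate sine estimator after mismatch compensation.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 1024, 512                      # repeated halves: x[n+D] = x[n]*e^{j*theta}
eps = 0.08                            # normalized CFO; theta = 2*pi*eps*D/N
g, phi = 1.3, np.deg2rad(10.0)        # exaggerated I/Q gain/phase imbalance

# textbook receiver I/Q-imbalance model: r = alpha*x + beta*conj(x)
alpha = (1 + g * np.exp(-1j * phi)) / 2
beta = (1 - g * np.exp(1j * phi)) / 2

s = (rng.choice([-1.0, 1.0], D) + 1j * rng.choice([-1.0, 1.0], D)) / np.sqrt(2)
n = np.arange(2 * D)
x = np.tile(s, 2) * np.exp(1j * 2 * np.pi * eps * n / N)  # CFO rotation
r = alpha * x + beta * np.conj(x)                          # I/Q mismatch

c = np.sum(r[D:] * np.conj(r[:D]))    # delay-D correlation
# naive angle estimate: biased by the image term |beta|^2 * e^{-j*theta}
print("true eps:          ", eps)
print("naive estimate:    ", np.angle(c) * N / (2 * np.pi * D))
# real part ~ D*(|alpha|^2+|beta|^2)*cos(theta); normalising by the received
# power removes the mismatch-dependent scale, but recovers only |eps|
P = np.mean(np.abs(r) ** 2)
cos_hat = np.clip(c.real / (D * P), -1.0, 1.0)
print("cosine-based |eps|:", np.arccos(cos_hat) * N / (2 * np.pi * D))
```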
Abstract:
Designing and estimating civil concrete structures is a complex process which many practitioners regard as tied to manual or semi-manual 2D design processes, and thus not amenable to improvement through automated, interacting design-estimating processes. This paper presents a feasibility study for the development of an automated estimator for concrete bridge design. The study offers a value proposition: an efficient automated model-based estimator can add value to the whole bridge design-estimating process by reducing estimation errors, shortening the time needed to produce successive estimates, and increasing the benefit of doing cost estimation compared with current practice. This is followed by a description of what constitutes an efficient automated model-based estimator and how it should be used. Finally, the process of model-based estimating is compared with current practice to highlight the value embedded in the automated processes.
Abstract:
The Automated Estimator and LCADesign are two early examples of nD modelling software, both of which rely on the extraction of quantities from CAD models to support their further processing. The issues of building information modelling (BIM), quantity takeoff for different purposes and the automation of quantity takeoff are discussed by comparing the aims and use of the two programs. The technical features of the two programs are also described. The technical issues around the use of 3D models are described, together with implementation issues and comments about the implementation of the IFC specifications. Some user issues that emerged through the development process are described, with a summary of the generic research tasks necessary to fully support the use of BIM and nD modelling.
Abstract:
The indoor air quality (IAQ) in buildings is currently assessed by measuring pollutants during building operation and comparing them with air quality standards. Current practice at the design stage tries to minimise the potential indoor air quality impacts of new building materials and contents by selecting low-emission materials. However, low-emission materials are not always available, and even when they are used, the aggregated pollutant concentrations from such materials are generally overlooked. This paper presents an innovative tool for estimating indoor air pollutant concentrations at the design stage, based on emissions over time from large-area building materials, furniture and office equipment. The estimator considers volatile organic compounds, formaldehyde and airborne particles from indoor materials and office equipment, together with the contribution of outdoor urban air pollutants as affected by urban location and ventilation system filtration. The estimated pollutant concentrations are for a single, fully mixed and ventilated zone in an office building, with acceptable levels derived from Australian and international health-based standards. The model acquires its dimensional data for the indoor spaces from a 3D CAD model via IFC files, and its emission data from a building products/contents emissions database. This paper describes the underlying approach to estimating indoor air quality and discusses the benefits of such an approach for designers and the occupants of buildings.
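A single fully mixed, ventilated zone of the kind described is conventionally governed by the textbook mass balance V·dC/dt = E(t) + Q·(1−η)·C_out − Q·C. The sketch below is a generic illustration of that balance with made-up numbers, not the paper's tool or its emissions database:

```python
import numpy as np

def concentration_over_time(E_t, Q, V, C_out, eta, C0=0.0, dt=1.0):
    """Explicit-Euler integration of V*dC/dt = E(t) + Q*(1-eta)*C_out - Q*C.

    E_t   : indoor emission rate per step (ug/h), e.g. summed over materials
    Q, V  : ventilation rate (m^3/h) and zone volume (m^3)
    C_out : outdoor concentration (ug/m^3); eta: filter efficiency (0..1)"""
    C, c = np.empty(len(E_t)), C0
    for k, E in enumerate(E_t):
        c += dt / V * (E + Q * (1.0 - eta) * C_out - Q * c)
        C[k] = c
    return C

# example: first-order decaying VOC emission from new materials over 14 days
t = np.arange(14 * 24)                    # hours
E_t = 2000.0 * np.exp(-t / 72.0)          # ug/h with a 72 h decay constant
C = concentration_over_time(E_t, Q=300.0, V=750.0, C_out=5.0, eta=0.6)
print(C.max())                            # peak indoor concentration (ug/m^3)
```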
Abstract:
The endeavour to obtain estimates of the durability of components, for use in life-cycle assessment or costing and in infrastructure and maintenance planning systems, is a large one. The factor method and the reference service life concept provide a very valuable structure, but do not resolve the central dilemma: the need to derive an extensive database of service lives. Traditional methods of estimating service life, such as dose functions or degradation models, can play a role in developing this database; however, the scale of the problem clearly indicates that individual dose functions cannot be derived for each component in each different local and geographic setting. Thus, a wider range of techniques is required in order to derive reference service lives. This paper outlines the approaches being taken in the Cooperative Research Centre for Construction Innovation project to predict reference service life. These include the development of fundamental degradation and microclimate models, the development of a situation-based reasoning 'engine' to vary the 'estimator' of service life, and the development of a database of expert opinion on performance (a Delphi study). These methods should be viewed as complementary rather than as discrete alternatives. As discussed in the paper, the situation-based reasoning approach in fact has the potential to encompass all the other methods.
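For reference, the factor method mentioned above (standardised in ISO 15686) multiplies a reference service life by modifying factors A-G. A minimal sketch with purely illustrative factor values:

```python
from math import prod

def estimated_service_life(rsl_years, factors):
    """ISO 15686 factor method: estimated service life = reference service
    life x A x B x ... x G (component quality, design level, work execution,
    indoor environment, outdoor environment, usage, maintenance)."""
    return rsl_years * prod(factors.values())

factors = {
    "A_component_quality": 1.0,
    "B_design_level": 0.9,          # exposed detailing
    "C_work_execution": 1.0,
    "D_indoor_environment": 1.0,
    "E_outdoor_environment": 0.8,   # coastal microclimate
    "F_usage_conditions": 1.0,
    "G_maintenance_level": 1.1,     # planned maintenance programme
}
print(estimated_service_life(40.0, factors))  # 31.68 years
```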
Abstract:
This paper discusses the issues with sharing information between different disciplines in collaborative projects. The focus is on the information itself rather than the wider issues of collaboration. A range of projects carried out by the Cooperative Research Centre for Construction Innovation (CRC CI) in Australia is used to illustrate the issues.
Abstract:
1. Ecological data sets often involve clustered measurements or repeated sampling in a longitudinal design. Choosing the correct covariance structure is an important step in the analysis of such data, as the covariance describes the degree of similarity among the repeated observations. 2. Three methods for choosing the covariance structure are the Akaike information criterion (AIC), the quasi-information criterion (QIC) and the deviance information criterion (DIC). We compared the three methods using a simulation study and a data set exploring the effects of forest fragmentation on avian species richness over 15 years. 3. The overall success rate was 80.6% for the AIC, 29.4% for the QIC and 81.6% for the DIC. For the forest fragmentation study, the AIC and DIC selected the unstructured covariance, whereas the QIC selected the simpler autoregressive covariance. Graphical diagnostics suggested that the unstructured covariance was probably correct. 4. We recommend using the DIC for selecting the covariance structure.
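For Gaussian clustered data, an AIC comparison of candidate covariance structures can be sketched directly; this is a generic maximum-likelihood illustration rather than the paper's simulation design (and under GEE, where only a quasi-likelihood is available, the QIC plays this role instead):

```python
import numpy as np
from scipy.stats import multivariate_normal

def aic_for_covariance(Y, structure):
    """AIC for a zero-mean Gaussian model of clustered residuals Y (n x m),
    with the covariance fitted under a given structure."""
    n, m = Y.shape
    S = (Y.T @ Y) / n                            # MLE of unstructured covariance
    if structure == "unstructured":
        cov, k = S, m * (m + 1) // 2
    elif structure == "independence":
        cov, k = np.diag(np.diag(S)), m
    elif structure == "ar1":                     # simple moment-based AR(1) fit
        sig2 = np.mean(np.diag(S))
        rho = np.mean([S[i, i + 1] for i in range(m - 1)]) / sig2
        lag = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
        cov, k = sig2 * rho ** lag, 2
    loglik = multivariate_normal(mean=np.zeros(m), cov=cov).logpdf(Y).sum()
    return -2.0 * loglik + 2 * k

# example: clusters of size 5 with true AR(1) correlation
rng = np.random.default_rng(4)
m, rho = 5, 0.7
R = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
Y = rng.multivariate_normal(np.zeros(m), R, size=300)
for s in ("independence", "ar1", "unstructured"):
    print(s, round(aic_for_covariance(Y, s), 1))  # AR(1) should win here
```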
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operational downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing these challenges. The proposed model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator models the actual survival status of individual failed units and estimates the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories rather than from reliability data. The estimated survival probabilities and the relevant condition histories are presented to the neural network as training targets and training inputs, respectively. The trained network can then estimate the future survival curve of a unit from a series of condition indices. Although the proposed concept may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdown. Computer-simulated data and industry case study data were used to compare the prognostic performance of the proposed model with that of four control models: two feed-forward neural networks with the same training function and structure as the proposed model but neglecting suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive predictions with a useful representation of survival probabilities. This work presents a compelling concept for non-parametric, data-driven prognosis and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecast. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques and increasingly comprehensive databases of asset condition data, holds promise for increased asset availability, maintenance cost effectiveness, operational safety and, ultimately, organisational competitiveness.
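For reference, the classical Kaplan-Meier product-limit estimator that the proposed model adapts can be written in a few lines; suspended (right-censored) units contribute through the risk set up to their suspension time. The thesis's adaptation (assigning survival probabilities to individual suspended units) is not reproduced here, and the data below are made up.

```python
import numpy as np

def kaplan_meier(times, failed):
    """Classical Kaplan-Meier product-limit estimator.

    times  : time to failure or suspension for each unit
    failed : True if the unit failed, False if it was suspended; suspended
             units stay in the risk set until their suspension time.
    Returns (distinct failure times, survival probabilities)."""
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    surv, s = [], 1.0
    ts = np.unique(times[failed])
    for t in ts:
        at_risk = np.sum(times >= t)            # units still running before t
        d = np.sum((times == t) & failed)       # failures observed at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return ts, np.array(surv)

# example: bearing run times in hours; False marks suspended units
t = [150, 340, 560, 560, 800, 950, 1100, 1400]
f = [True, True, True, False, True, False, True, False]
print(kaplan_meier(t, f))
```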