61 results for Two Approaches
Abstract:
This paper investigates the effect of Energy Performance Certificate (EPC) ratings on residential prices in Wales. Drawing on a sample of approximately 192,000 transactions, the capitalisation of energy efficiency ratings into house prices is investigated using two approaches. The first adopts a cross-sectional framework to investigate the effect of EPC rating on price. The second approach applies a repeat-sales methodology to investigate the impact of EPC rating on house price appreciation. Statistically significant positive price premiums are estimated for dwellings in EPC bands A/B (12.8%) and C (3.5%) compared to houses in band D. For dwellings in band E (−3.6%) and F (−6.5%) there are statistically significant discounts. Such effects may not be the result of energy performance alone. In addition to energy cost differences, the price effect may be due to additional benefits of energy efficient features. An analysis of the private rental segment reveals that, in contrast to the general market, low-EPC rated dwellings were not traded at a significant discount. This suggests different implicit prices of potential energy savings for landlords and owner-occupiers.
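The cross-sectional (hedonic) approach described above can be sketched numerically. Everything below is invented for illustration — the simulated prices, the single floor-area control, and the band premiums (seeded to resemble the reported magnitudes) — and is not the paper's actual specification or data:

```python
import numpy as np

# Toy hedonic regression: log price on EPC-band dummies plus one control.
# Band D is the reference category, as in the abstract.
rng = np.random.default_rng(0)
n = 2000
floor_area = rng.uniform(50, 200, n)          # hypothetical control (m^2)
band = rng.integers(0, 4, n)                  # 0=A/B, 1=C, 2=D (ref), 3=E

# Assumed "true" log-price premiums relative to band D (illustrative only)
premium = np.array([0.12, 0.035, 0.0, -0.036])
log_price = 10 + 0.005 * floor_area + premium[band] + rng.normal(0, 0.05, n)

# Design matrix: intercept, floor area, dummies for bands A/B, C and E
X = np.column_stack([
    np.ones(n),
    floor_area,
    (band == 0).astype(float),
    (band == 1).astype(float),
    (band == 3).astype(float),
])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta[2:] recover the premiums/discounts relative to band D
```

The repeat-sales approach would instead difference log prices of the same dwelling across two sales, removing time-invariant characteristics.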
Abstract:
1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. 
Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
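The linear-interpolation index compared above can be sketched numerically; the weekly counts below are invented, and the regional GAM alternative (which pools repeated counts across sites within a climatic region) is not implemented here:

```python
import numpy as np

# Minimal sketch of the linear-interpolation abundance index: missing weekly
# counts are interpolated between adjacent observed weeks, and the annual
# index is the sum over the monitoring season.
weeks = np.arange(1, 27)                      # hypothetical 26-week season
counts = np.array([0, 0, 1, 2, np.nan, 8, 12, np.nan, np.nan, 20, 15, 9,
                   np.nan, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                  dtype=float)

observed = ~np.isnan(counts)
filled = np.interp(weeks, weeks[observed], counts[observed])
annual_index = filled.sum()
```

As the proportion of missing weeks grows, this site-by-site interpolation has fewer anchor points, which is the regime where the abstract reports the regional GAM performing better.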
Abstract:
Although liquid matrix-assisted laser desorption/ionization (MALDI) has been used in mass spectrometry (MS) since the early introduction of MALDI, its substantial lack of sensitivity compared to solid (crystalline) MALDI was for a long time a major hurdle to its analytical competitiveness. In the last decade, this situation has changed with the development of new sensitive liquid matrices, which are often based on a binary matrix acid/base system. Some of these matrices were inspired by the recent progress in ionic liquid research, while others were developed from revisiting previous liquid MALDI work as well as from a combination of these two approaches. As a result, two high-performing liquid matrix classes have been developed, the ionic liquid matrices (ILMs) and the liquid support matrices (LSMs), now allowing MS measurements at a sensitivity level that is very close to the level of solid MALDI and in some cases even surpasses it. This chapter provides some basic information on a selection of highly successful representatives of these new liquid matrices and describes in detail how they are made and applied in MALDI MS analysis.
Abstract:
This article traces the paradoxical impact of Weber's oeuvre on two major scholars of nationalism, Ernest Gellner and Edward Shils. Both these scholars died in 1995, leaving behind a rich corpus of writings on the nation and nationalism, much of which was inspired by Max Weber. The paradox is that although neither scholar accepted Weber's sceptical attitude to the concept of ‘nation’, they both used his other major concepts, such as ‘rationality’, ‘disenchantment’, ‘unintended consequences’, the ‘ethic of responsibility’ and ‘charisma’, in their very analyses of the nation and nationalism. And they both saw, each in his own way, the nation and nationalism as constitutive elements of modern societies. However, the paradox ceases being a paradox if one sees the integration, by Shils and Gellner, of concepts of the nation and of nationalism in the analysis of modernity, as a development of Weber's ideas.
Abstract:
This article reflects on key methodological issues emerging from children and young people's involvement in data analysis processes. We outline a pragmatic framework illustrating different approaches to engaging children, using two case studies of children's experiences of participating in data analysis. The article highlights methods of engagement and important issues such as the balance of power between adults and children, training, support, ethical considerations, time and resources. We argue that involving children in data analysis processes can have several benefits, including enabling a greater understanding of children's perspectives and helping to prioritise children's agendas in policy and practice. (C) 2007 The Author(s). Journal compilation (C) 2007 National Children's Bureau.
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001 where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, ... contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
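The basic Zelterman estimator is simple enough to sketch. It uses only the frequencies of individuals seen once (f1) and twice (f2); the counts below are invented for illustration and are not the Bangkok data:

```python
import math

def zelterman(freqs):
    """Zelterman population-size estimator from zero-truncated count data.

    freqs[k] = number of individuals observed exactly k+1 times, so
    freqs[0] = f1 and freqs[1] = f2. Relying only on f1 and f2 is what
    makes the estimator robust to unobserved heterogeneity.
    """
    f1, f2 = freqs[0], freqs[1]
    n = sum(freqs)                        # total individuals ever observed
    lam = 2.0 * f2 / f1                   # local Poisson-rate estimate
    return n / (1.0 - math.exp(-lam))     # correct for the unseen zero class

# Illustrative frequencies: 300 seen once, 90 twice, 25 three times, 6 four times
n_hat = zelterman([300, 90, 25, 6])
```

The logistic-regression extension described in the abstract would replace the single rate `lam` with a covariate-dependent rate for each individual.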
Abstract:
The article considers screening human populations with two screening tests. If either of the two tests is positive, then full evaluation of the disease status is undertaken; however, if both diagnostic tests are negative, then disease status remains unknown. This procedure leads to a data constellation in which, for each disease status, the 2 × 2 table associated with the two diagnostic tests used in screening has exactly one empty, unknown cell. To estimate the unobserved cell counts, previous approaches assume independence of the two diagnostic tests and use specific models, including the special mixture model of Walter or unconstrained capture–recapture estimates. Often, as is also demonstrated in this article by means of a simple test, the independence of the two screening tests is not supported by the data. Two new estimators are suggested that allow association between the screening tests, although the form of association must be assumed to be homogeneous over disease status. These estimators are modifications of the simple capture–recapture estimator and easy to construct. The estimators are investigated for several screening studies with fully evaluated disease status in which the superior behavior of the new estimators compared to the previous conventional ones can be shown. Finally, the performance of the new estimators is compared with maximum likelihood estimators, which are more difficult to obtain in these models. The results indicate that the loss of efficiency is minor.
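The simple independence-based capture-recapture estimator that the article's new estimators modify can be sketched as follows; the cell counts are invented for illustration:

```python
def missing_cell_estimate(n11, n10, n01):
    """Estimate the unobserved double-negative cell n00 for one disease
    status, assuming the two screening tests are independent. This is the
    baseline capture-recapture estimator; the article's new estimators
    modify it to allow (homogeneous) association between the tests.

    n11: positive on both tests; n10 / n01: positive on exactly one test.
    """
    return n10 * n01 / n11

# Toy 2x2 table for one disease stratum with the double-negative cell empty
n11, n10, n01 = 40, 30, 20
n00_hat = missing_cell_estimate(n11, n10, n01)
total_hat = n11 + n10 + n01 + n00_hat
```

When the tests are positively associated, this independence estimate understates n00, which is the motivation for the modified estimators.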
Abstract:
Feed samples received by commercial analytical laboratories are often undefined or mixed varieties of forages, originate from various agronomic or geographical areas of the world, are mixtures (e.g., total mixed rations) and are often described incompletely or not at all. Six unified single-equation approaches to predict the metabolizable energy (ME) value of feeds determined in sheep fed at maintenance ME intake were evaluated utilizing 78 individual feeds representing 17 different forages, grains, protein meals and by-product feedstuffs. The predictive approaches evaluated were two each from the National Research Council [National Research Council (NRC), Nutrient Requirements of Dairy Cattle, seventh revised ed. National Academy Press, Washington, DC, USA, 2001], the University of California at Davis (UC Davis) and ADAS (Stratford, UK). Slopes and intercepts for the two ADAS approaches that utilized in vitro digestibility of organic matter and either measured gross energy (GE), or a prediction of GE from component assays, and one UC Davis approach, based upon in vitro gas production and some component assays, differed from both unity and zero, respectively, while this was not the case for the two NRC and one UC Davis approach. However, within these latter three approaches, the goodness of fit (r²) increased from the NRC approach utilizing lignin (0.61) to the NRC approach utilizing 48 h in vitro digestion of neutral detergent fibre (NDF; 0.72) and to the UC Davis approach utilizing a 30 h in vitro digestion of NDF (0.84). The reason for the difference between the precision of the NRC procedures was the failure of assayed lignin values to accurately predict 48 h in vitro digestion of NDF.
However, differences among the six predictive approaches in the number of supporting assays, and their costs, as well as the fact that the NRC approach is actually three related equations requiring categorical description of feeds (making them unsuitable for mixed feeds) while the ADAS and UC Davis approaches are single equations, suggest that the procedure of choice will vary depending upon local conditions, specific objectives and the feedstuffs to be evaluated. In contrast to the evaluation of the procedures among feedstuffs, no procedure was able to consistently discriminate the ME values of individual feeds within feedstuffs determined in vivo, suggesting that an accurate and precise ME predictive approach, both among and within feeds, remains to be identified. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Diabetes incurs heavy personal and health system costs. Self-management is required if complications are to be avoided. Adolescents face particular challenges as they learn to take responsibility for their diabetes. A systematic review of educational and psychosocial programmes for adolescents with diabetes was undertaken. This aimed to: identify and categorise the types of programmes that have been evaluated; assess the cost-effectiveness of interventions; identify areas where further research is required. Sixty-two papers were identified and subjected to a narrative review. Generic programmes focus on knowledge/skills, psychosocial issues, and behaviour/self-management. They result in modest improvements across a range of outcomes but improvements are often not sustained, suggesting a need for continuous support, possibly integrated into normal care. In-hospital education at diagnosis confers few advantages over home treatment. The greatest returns may be obtained by targeting poorly controlled individuals. Few studies addressed resourcing issues and robust cost-effectiveness appraisals are required to identify interventions that generate the greatest returns on expenditure. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
Abstract:
In plant tissues the extracellular environment or apoplast, incorporating the cell wall, is a highly dynamic compartment with a role in many important plant processes including defence, development, signalling and assimilate partitioning. Soluble apoplast proteins from Arabidopsis thaliana, Triticum aestivum and Oryza sativa were separated by two-dimensional electrophoresis. The molecular weights and isoelectric points for the dominant proteins were established prior to excision, sequencing and identification by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF MS). From the selected spots, 23 proteins from O. sativa and 25 proteins from A. thaliana were sequenced, of which nine identifications were made in O. sativa (39%) and 14 in A. thaliana (56%). This analysis revealed that: (i) patterns of proteins revealed by two-dimensional electrophoresis were different for each species, indicating that speciation could occur at the level of the apoplast; (ii) many of the proteins characterised belonged to diverse families, reflecting the multiple functions of the apoplast; and (iii) a large number of the apoplast proteins could not be identified, indicating that the majority of extracellular proteins are yet to be assigned. The principal proteins identified in the aqueous matrix of the apoplast were involved in defence, i.e. germin-like proteins or glucanases, and cell expansion, i.e. β-D-glucan glucohydrolases. This study has demonstrated that proteomic analysis can be used to resolve the apoplastic protein complement and to identify adaptive changes induced by environmental effectors.
Abstract:
There is an association between smoking and depression, yet the herbal antidepressant St John's wort (Hypericum perforatum L.: SJW) herb extract has not previously been investigated as an aid in smoking cessation. In this open, uncontrolled pilot study, 28 smokers of 10 or more cigarettes per day for at least one year were randomised to receive SJW herb extract (LI-160) 300 mg once or twice daily, taken for one week before and continued for 3 months after a target quit date. In addition, all participants received motivational/behavioural support from a trained pharmacist. At 3 months, the point prevalence and continuous abstinence rates were both 18%, and at 12 months were 0%. Fifteen participants (54%) reported 23 adverse events up to the end of the 3-month follow-up period. There was no statistically significant difference in the frequency of adverse events for participants taking SJW once or twice daily (p > 0.05). Most adverse events were mild, transient and non-serious. This preliminary study has not provided convincing evidence that a SJW herb extract plus individual motivational/behavioural support is likely to be effective as an aid in smoking cessation. However, it may be premature to rule out a possible effect on the basis of a single, uncontrolled pilot study, and other approaches involving SJW extract may warrant investigation.
Abstract:
This paper presents an improved parallel Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. Motion Vectors (MVs) are generated from the first-pass LHMEA and used as predictors for the second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduced the hashtable into video processing and completed a parallel implementation. The hashtable structure of LHMEA is improved compared to the original TPA and LHMEA. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression. The implementation contains spatial and temporal approaches. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.
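The second-pass hexagonal search can be sketched generically; this is a plain HEXBS block matcher on toy data, not the paper's parallel LHMEA/TPA implementation, and the predictor that LHMEA would supply is simply (0, 0) here:

```python
import numpy as np

# HEXBS: evaluate the large hexagon around the current best point, recentre
# until the centre wins, then refine with the small cross pattern.
LARGE_HEX = [(0, 0), (2, 0), (-2, 0), (1, 2), (1, -2), (-1, 2), (-1, -2)]
SMALL = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def sad(ref, cur, y, x, dy, dx, bs):
    """Sum of absolute differences; infinite cost outside the frame."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
        return np.inf
    return np.abs(ref[yy:yy+bs, xx:xx+bs].astype(int)
                  - cur[y:y+bs, x:x+bs].astype(int)).sum()

def hexbs(ref, cur, y, x, bs=8):
    best = (0, 0)
    while True:
        cands = [(sad(ref, cur, y, x, best[0]+dy, best[1]+dx, bs),
                  (best[0]+dy, best[1]+dx)) for dy, dx in LARGE_HEX]
        _, point = min(cands)
        if point == best:
            break
        best = point
    cands = [(sad(ref, cur, y, x, best[0]+dy, best[1]+dx, bs),
              (best[0]+dy, best[1]+dx)) for dy, dx in SMALL]
    return min(cands)[1]          # motion vector (dy, dx)

# Demo: the current block at (8, 8) was taken from the reference two rows down
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32))
cur = ref.copy()
cur[8:16, 8:16] = ref[10:18, 8:16]
mv = hexbs(ref, cur, 8, 8)
```

A good first-pass predictor shrinks the number of hexagon recentring steps, which is the point of feeding LHMEA motion vectors into the second pass.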
Abstract:
This paper proposes a full interference cancellation (FIC) approach for two-path cooperative communications. Unlike single-relay schemes, the two-path cooperative scheme involves two relay nodes, so that the source can continuously transmit data to the two relays alternately and the full bandwidth efficiency with respect to direct transmission can be retained. The two-path relay scheme may however suffer from inter-relay interference, which is caused by the simultaneous transmission of the source and one of the relays at any time. In this paper, the inter-relay interference is first expressed as a single recursive term in the received signal, and then the FIC approach is proposed to fully remove it. The FIC approach achieves not only better performance but also lower complexity than existing approaches. Numerical examples are given to verify the proposed approach.
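The recursive structure of the inter-relay interference lends itself to a toy numerical sketch. The channel model, gains and BPSK symbols below are assumptions made for illustration only, not the paper's system model: each received sample carries the current source symbol plus the previously relayed symbol through an inter-relay gain, and cancellation peels that term off recursively.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
h_s, g = 1.0, 0.6                      # assumed source and inter-relay gains
x = rng.choice([-1.0, 1.0], n)         # BPSK source symbols

# Received samples: current symbol plus the previously relayed symbol
# (noiseless for clarity of the cancellation step)
r = np.empty(n)
prev = 0.0                             # nothing relayed before the first symbol
for i in range(n):
    r[i] = h_s * x[i] + g * prev
    prev = x[i]

# Full interference cancellation: subtract the relayed symbol recursively,
# reusing each freshly detected symbol as the next interference estimate
x_hat = np.empty(n)
prev = 0.0
for i in range(n):
    x_hat[i] = np.sign((r[i] - g * prev) / h_s)
    prev = x_hat[i]
```

With noise, detection errors would propagate through the recursion, which is why performance analysis of such schemes tracks error accumulation.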
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
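One of the cross-validation-based selection criteria mentioned above can be sketched for a linear-in-the-parameter model. The data and candidate polynomial bases below are invented for the demo, and the closed-form leave-one-out (PRESS) residuals for linear least squares stand in for explicit refitting:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 60)
y = 1.0 - 2.0 * x + 0.5 * x**3 + rng.normal(0, 0.05, 60)   # true model: cubic

def press(X, y):
    """Leave-one-out mean squared error for a linear least-squares model,
    using the identity e_loo_i = e_i / (1 - h_ii) with h the hat matrix."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)          # leverages
    return np.mean((resid / (1.0 - h)) ** 2)

# Candidate models: polynomials of increasing degree (linear in the parameters)
scores = {deg: press(np.vander(x, deg + 1), y) for deg in range(1, 8)}
best_degree = min(scores, key=scores.get)
```

Forward-selection algorithms such as orthogonal least squares apply the same idea term by term rather than model by model, which scales better when the candidate basis is large.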