42 results for Application techniques
Abstract:
Purpose – This paper assesses the context within which integrated logistic support (ILS) can be implemented for whole-life performance of building services systems. Design/methodology/approach – Using ILS within a through-life business model (TLBM) provides a better framework for achieving a well-designed, constructed and managed product. However, for ILS to be implemented in a TLBM for building services systems, the practices, tools and techniques require certain contextual prerequisites tailored to the construction industry. These contextual prerequisites are discussed. Findings – The case studies conducted reinforced the contextual importance of prime contracting, partnering and team collaboration for the application of ILS techniques. A lack of data was a major hindrance to the full realisation of ILS techniques within the case studies. Originality/value – The paper concludes by recognising the value of these contextual prerequisites for the use of ILS techniques within the building industry.
Abstract:
The performance benefit of Grid systems comes from several strategies, of which partitioning applications into parallel tasks is the most important. In most cases, however, the gain from partitioning is eroded by synchronization overhead, caused mainly by the high variability of the completion times of the different tasks, which in turn stems from the large heterogeneity of Grid nodes. For this reason, it is important to have models that capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyse Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves on classical modelling techniques and highlights the impact of resource heterogeneity and network latency on application performance.
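A minimal illustration of the synchronization effect described above (not the paper's queueing-network model): under a fork-join barrier the job finishes only when its slowest task does, so heterogeneity in node speeds inflates the mean completion time. All numbers below are hypothetical.

```python
# Monte Carlo sketch: a job forks into n parallel tasks on heterogeneous
# nodes and completes only when the slowest task finishes (the barrier).
import numpy as np

rng = np.random.default_rng(0)

def mean_completion_time(node_speeds, work_per_task=1.0, trials=10_000):
    """Mean job time when each task runs on its own node.

    Task times are exponential with mean work/speed; the job time is the
    max over tasks, so slow nodes dominate the barrier wait.
    """
    means = work_per_task / np.asarray(node_speeds)
    samples = rng.exponential(means, size=(trials, len(means)))
    return samples.max(axis=1).mean()

homogeneous = [1.0, 1.0, 1.0, 1.0]
heterogeneous = [1.75, 1.25, 0.75, 0.25]   # same total speed, more spread
print(mean_completion_time(homogeneous))   # lower: balanced finish times
print(mean_completion_time(heterogeneous)) # higher: waits on the slow node
```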
Abstract:
The deployment of Quality of Service (QoS) techniques involves careful analysis of areas including business requirements, corporate strategy and the technical implementation process, which can lead to conflict or contradiction between the goals of the various user groups involved in policy definition. In addition, long-term change management poses a challenge, as these implementations typically require a high level of skill and experience, exposing organisations to effects such as “hyperthymestria” [1] and “The Seven Sins of Memory” defined by Schacter, discussed further within this paper. It is proposed that, given the information embedded within the packets of IP traffic, an opportunity exists to augment traffic management with a machine-learning, agent-based mechanism. This paper describes the process by which current policies are defined and the research required to support the development of an application that enables adaptive, intelligent Quality of Service controls to augment or replace the policy-based mechanisms currently in use.
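Purely as an illustrative sketch of the kind of machine-learning agent proposed (the features, training flows and class labels below are hypothetical, not from the paper), a simple classifier could map per-flow traffic features to a DSCP class:

```python
# Hypothetical sketch: a decision tree standing in for the proposed
# machine-learning QoS agent, mapping flow features to a DSCP class.
from sklearn.tree import DecisionTreeClassifier

# Features per flow: [mean packet size (bytes), mean inter-arrival (ms), dst port]
X = [[120, 20, 5060],   # small, regularly spaced packets -> voice-like
     [1400, 5, 443],    # large, closely spaced packets -> bulk transfer
     [160, 25, 5060],
     [1300, 2, 8080]]
y = ["EF", "BE", "EF", "BE"]   # DSCP classes: Expedited / Best Effort

agent = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(agent.predict([[130, 22, 5060]]))   # expected: ['EF']
```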
Abstract:
This study assesses the current state of adult skeletal age-at-death estimation in biological anthropology through analysis of data published in recent research articles from three major anthropological and archaeological journals (2004–2009). The most commonly used adult ageing methods, the age of ‘adulthood’, age ranges and the maximum age reported for ‘mature’ adults were compared. The results showed a wide range of variability in the age at which individuals were determined to be adult (from 14 to 25 years), uneven age ranges, a lack of standardisation in the use of descriptive age categories and the inappropriate application of some ageing methods for the sample being examined. Such discrepancies make comparisons between skeletal samples difficult, while the inappropriate use of some techniques makes the resultant age estimations unreliable. At a time when national and even global comparisons of past health are becoming prominent, standardisation of the terminology and age categories used to define adults within each sample is fundamental. It is hoped that this research will prompt discussions in the osteological community (both nationally and internationally) about what defines an ‘adult’, how to standardise the age ranges that we use and how individuals should be assigned to each age category. Skeletal markers have been proposed to help physically identify ‘adult’ individuals.
Abstract:
Increasingly, the microbiological scientific community is relying on molecular biology to define the complexity of the gut flora and to distinguish one organism from the next. This is particularly pertinent in the field of probiotics, and probiotic therapy, where identifying probiotics from the commensal flora is often warranted. Current techniques, including genetic fingerprinting, gene sequencing, oligonucleotide probes and specific primer selection, discriminate closely related bacteria with varying degrees of success. Additional molecular methods being employed to determine the constituents of complex microbiota in this area of research are community analysis, denaturing gradient gel electrophoresis (DGGE)/temperature gradient gel electrophoresis (TGGE), fluorescent in situ hybridisation (FISH) and probe grids. Certain approaches enable specific aetiological agents to be monitored, whereas others allow the effects of dietary intervention on bacterial populations to be studied. Other approaches demonstrate diversity, but may not always enable quantification of the population. At the heart of current molecular methods is sequence information gathered from culturable organisms. However, the diversity and novelty identified when applying these methods to the gut microflora demonstrates how little is known about this ecosystem. Of greater concern is the inherent bias associated with some molecular methods. As we understand more of the complexity and dynamics of this diverse microbiota we will be in a position to develop more robust molecular-based technologies to examine it. In addition to identification of the microbiota and discrimination of probiotic strains from commensal organisms, the future of molecular biology in the field of probiotics and the gut flora will, no doubt, stretch to investigations of functionality and activity of the microflora, and/or specific fractions. The quest will be to demonstrate the roles of probiotic strains in vivo and not simply their presence or absence.
Abstract:
This paper discusses how the use of computer-based modelling tools has aided the design of a telemetry unit for use in oil well logging. With the aid of modern computer-based simulation techniques, the new design is capable of operating at data rates 2.5 times higher than previous designs.
Abstract:
This paper discusses the application of model reference adaptive control concepts to the automatic tuning of PID controllers. The gradient approach is described, and the effectiveness of the proposed method is demonstrated through simulated examples.
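For readers unfamiliar with the gradient approach, the following toy sketch applies the textbook MIT rule to a first-order plant; it illustrates the adaptation mechanism only and is not the paper's PID tuning scheme.

```python
# Toy MIT-rule (gradient) sketch, not the paper's PID scheme.
# Plant:            dy/dt  = -y + k*u     (gain k unknown to controller)
# Reference model:  dym/dt = -ym + uc
# Control law:      u = theta*uc; the match is perfect when theta = 1/k
# MIT rule:         dtheta/dt = -gamma * e * ym,  with  e = y - ym
k, gamma, dt = 2.0, 0.5, 0.01
y = ym = 0.0
theta = 0.0                                   # initial adaptive gain
for step in range(60_000):
    uc = 1.0 if (step * dt) % 20 < 10 else -1.0   # square-wave setpoint
    u = theta * uc
    e = y - ym
    y += dt * (-y + k * u)                    # Euler integration
    ym += dt * (-ym + uc)
    theta += dt * (-gamma * e * ym)
print(theta)                                  # should approach 1/k = 0.5
```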
Abstract:
Research was conducted on methodological issues concerning the Theory of Planned Behaviour (TPB), determining an appropriate measurement (direct and indirect) and a plausible scaling technique (unipolar and bipolar) for the constructs attitude, subjective norm, perceived behavioural control and intention, which are important in explaining farm-level tree planting in Pakistan. Unipolar scoring of beliefs showed higher correlations among the TPB constructs than the bipolar scaling technique. Both direct and indirect methods yielded significant results in explaining the intention to practise farm forestry, except for the belief-based measure of perceived behavioural control (PBC), which was statistically non-significant. A need to examine the scoring of PBC more carefully is expressed.
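The scaling issue can be made concrete with a toy example (hypothetical data, not the study's): indirect TPB measures multiply belief strength by outcome evaluation, so unipolar (1–7) and bipolar (−3 to +3) codings of the same responses yield different products and hence different correlations with intention.

```python
# Hypothetical data: indirect TPB measures multiply belief strength by
# outcome evaluation, so unipolar (1..7) and bipolar (-3..+3) codings of
# the same 7-point responses give different belief-evaluation products.
import numpy as np

belief = np.array([7, 6, 2, 5, 1])           # belief-strength responses
evaluation = np.array([6, 7, 3, 5, 2])       # outcome-evaluation responses
intention = np.array([6, 7, 2, 5, 1])

uni = belief * evaluation                    # unipolar scoring
bi = (belief - 4) * (evaluation - 4)         # bipolar recoding
print(np.corrcoef(uni, intention)[0, 1])     # correlation with intention
print(np.corrcoef(bi, intention)[0, 1])
```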
Abstract:
This paper reviews the current state of development of both near-infrared (NIR) and mid-infrared (MIR) spectroscopic techniques for process monitoring, quality control, and authenticity determination in cheese processing. Infrared spectroscopy has been identified as an ideal process analytical technology tool, and recent publications have demonstrated the potential of both NIR and MIR spectroscopy, coupled with chemometric techniques, for monitoring coagulation, syneresis, and ripening as well as determination of authenticity, composition, sensory, and rheological parameters. Recent research is reviewed and compared on the basis of experimental design, spectroscopic and chemometric methods employed to assess the potential of infrared spectroscopy as a technology for improving process control and quality in cheese manufacture. Emerging research areas for these technologies, such as cheese authenticity and food chain traceability, are also discussed.
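As a hedged sketch of the chemometric step these studies share, partial least squares (PLS) regression can relate spectra to a reference measurement; the "spectra" below are synthetic stand-ins, not cheese data.

```python
# Synthetic sketch of a chemometric calibration: PLS regression relating
# "spectra" (random stand-ins for NIR/MIR absorbances) to a reference
# value such as moisture or a ripening index.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
X = rng.normal(size=(n_samples, n_wavelengths))
y = 2.0 * X[:, 40] + X[:, 120] + rng.normal(scale=0.1, size=n_samples)

pls = PLSRegression(n_components=5)
print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())
```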
Abstract:
In this paper we examine the order of integration of EuroSterling interest rates by employing techniques that can allow for a structural break under the null and/or alternative hypothesis of the unit-root tests. In light of these results, we investigate the cointegrating relationship implied by the single, linear expectations hypothesis of the term structure of interest rates employing two techniques, one of which allows for the possibility of a break in the mean of the cointegrating relationship. The aim of the paper is to investigate whether or not the interest rate series can be viewed as I(1) processes and furthermore, to consider whether there has been a structural break in the series. We also determine whether, if we allow for a break in the cointegration analysis, the results are consistent with those obtained when a break is not allowed for. The main results reported in this paper support the conjecture that the ‘short’ Euro-currency rates are characterised as I(1) series that exhibit a structural break on or near Black Wednesday, 16 September 1992, whereas the ‘long’ rates are I(1) series that do not support the presence of a structural break. The evidence from the cointegration analysis suggests that tests of the expectations hypothesis based on data sets that include the ERM crisis period, or a period that includes a structural break, might be problematic if the structural break is not explicitly taken into account in the testing framework.
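A minimal sketch of this style of testing, using statsmodels on a synthetic random walk with an artificial level shift (not the EuroSterling data): an augmented Dickey-Fuller test, then a Zivot-Andrews test that allows one endogenous structural break.

```python
# Synthetic sketch: ADF unit-root test, plus a Zivot-Andrews test that
# allows one endogenous structural break in the series.
import numpy as np
from statsmodels.tsa.stattools import adfuller, zivot_andrews

rng = np.random.default_rng(2)
rates = np.cumsum(rng.normal(size=500))      # I(1) by construction
rates[250:] += 2.0                           # artificial level shift

adf_stat, adf_p, *_ = adfuller(rates)
za_stat, za_p, _, _, break_idx = zivot_andrews(rates, regression="c")
print(f"ADF p={adf_p:.3f}, ZA p={za_p:.3f}, break at obs {break_idx}")
```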
Abstract:
Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
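As a small sketch of the traditional, parametric side of this comparison, the classical empirical semivariogram can be estimated directly from scattered observations (coordinates and values below are synthetic); fitting a parametric model, or substituting a nonparametric correlogram from a complete field, would follow this step.

```python
# Classical empirical semivariogram from scattered observations:
# gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs at distance ~ h.
import numpy as np

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(200, 2))   # synthetic gauge locations (km)
values = np.sin(coords[:, 0] / 15.0) + rng.normal(scale=0.2, size=200)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
sq = 0.5 * (values[:, None] - values[None, :]) ** 2

bins = np.linspace(1, 50, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d >= lo) & (d < hi)
    print(f"lag {lo:4.1f}-{hi:4.1f} km: gamma = {sq[mask].mean():.3f}")
```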
Abstract:
A systematic approach is presented for obtaining cylindrical distribution functions (CDFs) of noncrystalline polymers which have been oriented by extension. The scattering patterns and CDFs are also sharpened by the method proposed by Deas and by Ruland. Data from atactic poly(methyl methacrylate) and polystyrene are analysed by these techniques. The methods could also be usefully applied to liquid crystals.
Abstract:
In this paper we explore classification techniques for ill-posed problems. Two classes are linearly separable in some Hilbert space X if they can be separated by a hyperplane. We investigate stable separability, i.e. the case where there is a positive distance between two separating hyperplanes. When the data in the space Y are generated by a compact operator A applied to the system states x ∈ X, we show that in general we do not obtain stable separability in Y even if the problem in X is stably separable. In particular, we show this for the case where a nonlinear classification is generated from a non-convergent family of linear classes in X. We apply our results to the problem of quality control of fuel cells, where we classify fuel cells according to their efficiency. A fuel cell can potentially be classified using either an externally measured magnetic field or some internal current; however, the current cannot be measured directly, since the fuel cell is not accessible while in operation. The first possibility is to apply discrimination techniques directly to the measured magnetic fields. The second approach first reconstructs the currents and then carries out the classification on the current distributions. We show that both approaches need regularization and that the regularized classifications are not, in general, equivalent. Finally, we investigate a widely used linear classification algorithm, Fisher's linear discriminant, with respect to its ill-posedness when applied to data generated via a compact integral operator. We show that the method does not remain stable as the number of measurement points becomes large.
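A sketch of the stability issue under stated assumptions: smoothing the states stands in for the compact operator, making the within-class covariance ill-conditioned, and shrinkage regularization of Fisher's discriminant is one standard remedy (not necessarily the regularization analysed in the paper).

```python
# Smoothed data as a stand-in for a compact operator A applied to the
# states; shrinkage stabilizes the within-class covariance estimate.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, m = 80, 300                       # many measurement points, few samples
states = rng.normal(size=(n, m))
states[:40] += 0.5                   # two classes, separated in state space
data = gaussian_filter1d(states, sigma=10, axis=1)   # "compact operator"
labels = np.r_[np.zeros(40), np.ones(40)]

Xtr, Xte, ytr, yte = train_test_split(data, labels, random_state=0)
for shrink in (None, "auto"):        # no regularization vs. Ledoit-Wolf
    lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=shrink)
    print(shrink, lda.fit(Xtr, ytr).score(Xte, yte))
```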
Abstract:
The potential for spatial dependence in models of voter turnout, although plausible from a theoretical perspective, has not been adequately addressed in the literature. Using recent advances in Bayesian computation, we formulate and estimate the previously unutilized spatial Durbin error model and apply this model to the question of whether spillovers and unobserved spatial dependence in voter turnout matters from an empirical perspective. Formal Bayesian model comparison techniques are employed to compare the normal linear model, the spatially lagged X model (SLX), the spatial Durbin model, and the spatial Durbin error model. The results overwhelmingly support the spatial Durbin error model as the appropriate empirical model.
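For concreteness, the SLX ingredient shared by the spatial Durbin specifications can be sketched by augmenting the regressors with their spatial lags; the weights matrix and data below are synthetic, and the full spatial Durbin error model would add a spatially autocorrelated error term (and, in the paper, Bayesian estimation) that this toy least-squares fit omits.

```python
# Toy sketch of the SLX ingredient: regressors X augmented with their
# spatial lags WX, with W a row-standardized nearest-neighbour matrix.
import numpy as np

rng = np.random.default_rng(5)
n = 100
coords = rng.uniform(0, 1, size=(n, 2))          # hypothetical locations
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
W = np.zeros((n, n))
nearest = np.argsort(dist, axis=1)[:, 1:6]       # 5 nearest neighbours
np.put_along_axis(W, nearest, 1.0, axis=1)
W /= W.sum(axis=1, keepdims=True)                # row-standardize

X = rng.normal(size=(n, 2))                      # e.g. income, education
beta, theta = np.array([1.0, -0.5]), np.array([0.3, 0.2])
y = X @ beta + (W @ X) @ theta + rng.normal(scale=0.1, size=n)

Z = np.column_stack([np.ones(n), X, W @ X])      # design matrix [1, X, WX]
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(coef)   # intercept ~ 0, then estimates of beta and theta
```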