846 results for "Using an harmonic instrument"
Abstract:
Research shows that poor indoor air quality (IAQ) in school buildings can reduce students' performance as assessed by short-term computer-based tests, whereas good air quality in classrooms can enhance children's concentration and teachers' productivity. Investigation of air quality in classrooms helps us to characterise pollutant levels and implement corrective measures. Outdoor pollution, ventilation equipment, furnishings, and human activities all affect IAQ. In school classrooms the occupancy density is high (1.8–2.4 m²/person) compared to offices (10 m²/person). Ventilation systems expend energy, and there is a trend to save energy by reducing ventilation rates; we therefore need to establish the minimum acceptable level of fresh air required for the health of the occupants. This paper describes a project that aims to investigate the effect of IAQ and ventilation rates on pupils' performance and health using psychological tests, with the goal of recommending suitable ventilation rates for classrooms and examining the suitability of existing air quality guidelines for classrooms. Air quality, ventilation rates and pupils' performance in classrooms will be evaluated in parallel measurements. In addition, Visual Analogue Scales will be used to assess subjective perception of the classroom environment and SBS symptoms. Pupil performance will be measured with Computerised Assessment Tests (CAT) and Pen and Paper Performance Tasks, while physical parameters of the classroom environment will be recorded using an advanced data-logging system. A total of 20 primary schools in the Reading area are expected to participate in the present investigation, with the participating pupils in the 9–11 year age group. On completion of the project, recommendations for suitable ventilation rates for schools will be formulated based on the overall data.
Abstract:
Although the construction pollution index has been put forward and shown to be an efficient approach to reducing or mitigating pollution levels during the construction planning stage, how to select the best construction plan by distinguishing the degrees of its potential adverse environmental impacts remains an open research task. This paper first reviews environmental issues and their characteristics in construction, which are critical factors in evaluating the potential adverse impacts of a construction plan. These environmental characteristics are then used to structure two decision models for environmentally conscious construction planning using an analytic network process (ANP): a complicated model and a simplified model. The two ANP models are combined into what is called the EnvironalPlanning system, which is applied to evaluate the potential adverse environmental impacts of alternative construction plans.
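The abstract does not spell out the two models' contents, but the core ANP computation they rest on is the limit supermatrix. A minimal sketch, assuming a made-up four-element network (two candidate plans, two environmental criteria) and hypothetical local priorities:

```python
import numpy as np

# Hypothetical weighted supermatrix for a tiny ANP network: two candidate
# construction plans (indices 0-1) and two environmental-impact criteria
# (indices 2-3). Entries are made-up local priorities; every column of a
# weighted supermatrix must sum to 1.
W = np.array([
    [0.10, 0.00, 0.60, 0.30],
    [0.00, 0.10, 0.40, 0.70],
    [0.45, 0.18, 0.00, 0.00],
    [0.45, 0.72, 0.00, 0.00],
])

def limit_priorities(W, tol=1e-12, max_iter=10_000):
    """Raise the supermatrix to successive powers until it converges;
    any column of the limit matrix then holds the global priorities."""
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.max(np.abs(M_next - M)) < tol:
            break
        M = M_next
    return M_next[:, 0]

print(limit_priorities(W))  # first two entries rank the alternative plans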
Abstract:
This paper extends the build-operate-transfer (BOT) concession model (BOTCcM) to a new method for identifying a concession period by using bargaining-game theory. The concession period is one of the most important decision variables in arranging a BOT-type contract, yet few methodologies are available to help determine its value. The BOTCcM presents an alternative method by which a group of concession period solutions can be produced. Nevertheless, a typical weakness of BOTCcM is that the model cannot recommend a specific concession time span. This paper introduces a new method called the BOT bargaining concession model (BOTBaC) to enable the identification of a specific concession period, taking into account the bargaining behavior of the two parties engaged in a BOT contract, namely the investor and the government concerned. The application of BOTBaC is demonstrated using an example case.
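The BOTBaC formulation itself is not given in the abstract; as a rough illustration of how bargaining-game reasoning can pin down a single concession period, here is a generic Nash-bargaining sketch, with entirely hypothetical utility curves and bargaining power beta (not the paper's model):

```python
import numpy as np

# Generic Nash bargaining over a candidate concession period T, as a
# stand-in for the abstract's bargaining step. Utility shapes, the
# feasible interval and beta are all hypothetical placeholders.
T = np.linspace(10, 30, 2001)        # candidate concession periods (years)
u_investor = (T - 10) / 20           # investor prefers longer concessions
u_government = (30 - T) / 20         # government prefers shorter ones
beta = 0.5                           # relative bargaining power

# Nash product with disagreement payoffs normalised to zero.
nash = np.power(u_investor, beta) * np.power(u_government, 1 - beta)
T_star = T[np.argmax(nash)]
print(f"bargained concession period ~ {T_star:.1f} years")
```

With symmetric power and linear utilities the product peaks at the midpoint of the interval; skewing beta moves the bargained period toward the stronger party's preference.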
Abstract:
Investigation of the fracture mode of hard and soft wheat endosperm was aimed at gaining a better understanding of the fragmentation process. Fracture mechanical characterization was based on the three-point bending test, which enables stable crack propagation to take place in small rectangular pieces of wheat endosperm. The crack length can be measured in situ using an optical microscope with light illumination from the side or the back of the specimen. Two new techniques, a geometric approach and a compliance method, were developed and used to estimate the fracture toughness of wheat endosperm. The geometric approach gave average fracture toughness values of 53.10 and 27.0 J m⁻² for hard and soft endosperm, respectively; fracture toughness estimated using the compliance method gave values of 49.9 and 29.7 J m⁻², respectively. Compressive properties of the endosperm along three mutually perpendicular axes revealed that the hard and soft endosperms are isotropic composites. Scanning electron microscopy (SEM) observation of the fracture surfaces and the energy-time curves of loading-unloading cycles revealed plastic flow during crack propagation for both the hard and soft endosperms, and confirmed that the fracture mode is significantly related to the level of adhesion between starch granules and the protein matrix.
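For reference, the compliance method mentioned here is usually based on the standard linear-elastic fracture mechanics relation (the textbook Irwin-Kies form, not necessarily the paper's exact expression):

$$ G_c = \frac{P_c^{2}}{2B}\,\frac{\mathrm{d}C}{\mathrm{d}a} $$

where P_c is the load at crack growth, B the specimen thickness, C = u/P the measured load-point compliance, and a the crack length; G_c carries the J m⁻² units quoted above.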
Abstract:
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space.
Abstract:
There are still major challenges in automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content using an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
Abstract:
In this paper we present initial results of using an artificial neural network to predict the onset of Parkinson's Disease tremors in a human subject. Data for the network were obtained from implanted deep-brain electrodes. A tuned artificial neural network was shown to be able to identify the pattern of tremor onset from these real-time recordings.
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
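The multiplicative nonnegative quadratic programming step is not specified in detail in the abstract; below is a hedged sketch of a Lee-Seung-style multiplicative update that keeps kernel weights nonnegative while fitting a Parzen-type target. The kernel width h and the use of all samples as candidate kernels are assumptions, and the D-optimality selection stage, which is what produces the sparsity, is omitted.

```python
import numpy as np

# Sketch: nonnegative kernel weights via multiplicative updates for
# min ||p - K w||^2 with w >= 0 (valid because K and p are nonnegative).
rng = np.random.default_rng(0)
x = rng.normal(size=200)                       # 1-D sample
h = 0.3                                        # kernel width (assumed)

K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
p = K.mean(axis=1) / (h * np.sqrt(2 * np.pi))  # Parzen estimate at the samples
K = K / (h * np.sqrt(2 * np.pi))               # columns become kernel densities

w = np.full(len(x), 1.0 / len(x))              # start on the simplex
v = K.T @ p                                    # nonnegative linear term
B = K.T @ K                                    # nonnegative quadratic term
for _ in range(500):
    w *= v / (B @ w + 1e-12)                   # multiplicative update keeps w >= 0
w /= w.sum()                                   # renormalise to sum-to-one weights
```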
Abstract:
Urban surveillance footage can be of poor quality, partly due to the low quality of the camera and partly due to harsh lighting and heavily reflective scenes. For some computer surveillance tasks very simple change detection is adequate, but sometimes a more detailed change detection mask is desirable, e.g., for accurately tracking identity when faced with multiple interacting individuals, and in pose-based behaviour recognition. We present a novel technique for enhancing a low-quality change detection into a better segmentation using an image combing estimator in an MRF-based model.
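The paper's image combing estimator is not described in the abstract, so the sketch below substitutes a generic alternative, plainly named: frame differencing cleaned up by synchronous ICM-style sweeps under an Ising smoothness prior. Grayscale frames and all thresholds are assumptions.

```python
import numpy as np

# Generic MRF-smoothed change detection (NOT the paper's image combing
# estimator): threshold a difference image, then iterate label updates
# that trade data cost against an Ising smoothness penalty.
def mrf_change_mask(frame, background, tau=25.0, beta=1.5, sweeps=5):
    diff = np.abs(frame.astype(float) - background.astype(float))
    unary_fg = tau - diff            # cheap to label foreground where change is big
    unary_bg = diff - tau            # cheap to label background where change is small
    mask = (diff > tau).astype(int)  # initial naive change mask
    for _ in range(sweeps):
        p = np.pad(mask, 1)          # count foreground neighbours (4-connectivity)
        nfg = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        cost_fg = unary_fg + beta * (4 - nfg)   # penalise disagreeing neighbours
        cost_bg = unary_bg + beta * nfg
        mask = (cost_fg < cost_bg).astype(int)
    return mask.astype(bool)
```

Isolated false detections now need a difference well above tau to survive, while pixels surrounded by foreground are kept even when their own difference is marginal.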
Abstract:
In this paper we consider the possibility of using an artificial neural network to accurately identify the onset of Parkinson's Disease tremors in human subjects. Data for the network were obtained by means of electrodes implanted deep in the brain. The results presented were obtained from a practical study (i.e., real rather than simulated data) but should be regarded as initial trials to be discussed further. It can be seen that a tuned artificial neural network can act as an extremely effective predictor in these circumstances.
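Neither of the two tremor abstracts specifies the network or its features, so the following is only a shape-of-the-solution sketch: a small feed-forward classifier (scikit-learn's MLPClassifier) trained on spectral features of synthetic one-second windows, with an artificial 5 Hz tremor-band rhythm standing in for the real deep-brain recordings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
fs, win = 200, 200                       # assumed sampling rate / 1-s windows

def make_window(tremor):
    """Synthetic electrode window; tremor onset adds a growing 5 Hz rhythm."""
    t = np.arange(win) / fs
    sig = rng.normal(scale=0.5, size=win)
    if tremor:
        sig += np.sin(2 * np.pi * 5 * t) * np.linspace(0, 1, win)
    return np.abs(np.fft.rfft(sig))[:40]  # band-power style features

X = np.array([make_window(i % 2) for i in range(400)])
y = np.arange(400) % 2                   # 0 = quiescent, 1 = pre-tremor
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:300], y[:300])
print("held-out accuracy:", net.score(X[300:], y[300:]))
```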
Abstract:
An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample Parzen window density estimate.
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
Abstract:
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured by using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
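The three preceding abstracts share one computational core: orthogonal forward regression that greedily adds the regressor whose inclusion most reduces the leave-one-out PRESS statistic and stops when PRESS stops falling. A compact sketch of that idea (local regularization and the papers' exact recursive bookkeeping are omitted; the helper name ofr_press is ours):

```python
import numpy as np

def ofr_press(Phi, y):
    """Forward-select columns of the regressor matrix Phi by PRESS."""
    n, m = Phi.shape
    selected, Q, best_press = [], np.empty((n, 0)), np.inf
    while len(selected) < m:
        scores, qs = {}, {}
        for j in range(m):
            if j in selected:
                continue
            q = Phi[:, j].astype(float)
            if Q.shape[1]:                               # Gram-Schmidt step
                q = q - Q @ ((Q.T @ q) / np.sum(Q**2, axis=0))
            if q @ q < 1e-10:
                continue                                 # near-dependent column
            Qj = np.hstack([Q, q[:, None]])
            norms = np.sum(Qj**2, axis=0)
            h = np.sum(Qj**2 / norms, axis=1)            # leverages h_ii
            resid = y - Qj @ ((Qj.T @ y) / norms)        # orthogonal LS residuals
            scores[j] = np.sum((resid / (1.0 - h + 1e-12))**2)  # PRESS
            qs[j] = q
        if not scores:
            break
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_press:
            break                                        # PRESS stopped decreasing
        best_press = scores[j_best]
        Q = np.hstack([Q, qs[j_best][:, None]])
        selected.append(j_best)
    return selected, best_press
```

Because the selected columns are kept mutually orthogonal, the leverages and residuals of each candidate model come from cheap column-wise sums rather than a full matrix inversion.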
Abstract:
This correspondence introduces a new orthogonal forward regression (OFR) model identification algorithm that uses D-optimality for model structure selection and is based on M-estimators of the parameters. The M-estimator is a classical robust parameter-estimation technique for tackling bad data conditions such as outliers; computationally, the M-estimator can be derived using an iteratively reweighted least squares (IRLS) algorithm. D-optimality is a model structure robustness criterion from experimental design used to tackle ill-conditioning in the model structure. OFR, often based on the modified Gram-Schmidt procedure, is an efficient method that incorporates structure selection and parameter estimation simultaneously. The basic idea of the proposed approach is to incorporate an IRLS inner loop into the modified Gram-Schmidt procedure. In this manner, the OFR algorithm for parsimonious model structure determination is extended to bad data conditions, with improved performance via the derivation of parameter M-estimators with inherent robustness to outliers. Numerical examples are included to demonstrate the effectiveness of the proposed algorithm.
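As a minimal sketch of the robust inner loop the correspondence refers to, here is a Huber M-estimator computed by IRLS; the D-optimality-driven structure selection wrapped around it in the paper is not shown, and the tuning constant c = 1.345 is the conventional choice for roughly 95% efficiency under Gaussian noise.

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=100, tol=1e-10):
    """Huber M-estimate of linear-model parameters via IRLS."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ theta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12 # robust MAD scale
        u = np.abs(r) / scale
        w = np.ones_like(u)
        w[u > c] = c / u[u > c]                       # down-weight outliers
        Xw = X * w[:, None]                           # weighted design matrix
        theta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta
```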
Abstract:
It is reported in the literature that distances from the observer are underestimated more in virtual environments (VEs) than in physical-world conditions. On the other hand, estimation of size in VEs is quite accurate and follows a size-constancy law when rich cues are present. This study investigates how estimation of distance in a CAVE™ environment is affected by poor and rich cue conditions, subject experience, and environmental learning when the position of objects is estimated using an experimental paradigm that exploits size constancy. A group of 18 healthy participants was asked to move a virtual sphere, controlled using the wand joystick, to the position where they thought a previously displayed virtual cube (the stimulus) had appeared. Real-size physical models of the virtual objects were also presented to the participants as a reference of real physical distance during the trials. An accurate estimation of distance implied that the participants assessed the relative size of the sphere and cube correctly. The cube appeared at depths between 0.6 m and 3 m, measured along the depth direction of the CAVE. The task was carried out in two environments: a poor cue one with limited background cues, and a rich cue one with textured background surfaces. It was found that distances were underestimated in both poor and rich cue conditions, with greater underestimation in the poor cue environment. The analysis also indicated that factors such as subject experience and environmental learning were not influential. However, least-squares fitting of Stevens' power law indicated a high degree of accuracy in the estimation of object locations, higher than in other studies that were not based on a size-estimation paradigm. Thus, as an indirect result, this study appears to show that accuracy in estimating egocentric distances may be increased by using an experimental method that provides information on the relative size of the objects used.
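For concreteness, fitting Stevens' power law (perceived = a · physical^b) is typically done by least squares in log-log space; the sketch below uses made-up perceived distances over the 0.6–3 m range quoted in the abstract, with an exponent b < 1 indicating compression (underestimation).

```python
import numpy as np

# Hypothetical paired observations over the CAVE's 0.6-3 m depth range.
physical = np.array([0.6, 1.0, 1.5, 2.0, 2.5, 3.0])         # metres
perceived = np.array([0.55, 0.88, 1.28, 1.65, 2.02, 2.40])  # made-up data

# Linear fit in log-log space recovers the power-law parameters.
b, log_a = np.polyfit(np.log(physical), np.log(perceived), 1)
a = np.exp(log_a)
print(f"perceived = {a:.2f} * physical^{b:.2f}")  # b < 1 => underestimation
```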