30 results for Wiener criterion test, criterion heat, calore, Laplace


Relevance:

30.00%

Publisher:

Abstract:

Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
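
Since this abstract leans on the classical Parzen window estimate as its regression target, a minimal sketch of that target may help; the Gaussian kernel and the single bandwidth parameter `sigma` are the standard choices, assumed here for illustration.

```python
import numpy as np

def parzen_window(x_train, x_eval, sigma):
    """Classical Parzen window density estimate with a Gaussian kernel:
    the average of N identical kernels centred on the training points.
    The bandwidth sigma is the one parameter left to tune."""
    n, d = x_train.shape
    diff = x_eval[:, None, :] - x_train[None, :, :]   # (m, n, d)
    sq_dist = np.sum(diff**2, axis=2)                 # (m, n)
    norm = (2.0 * np.pi * sigma**2) ** (d / 2.0)
    return np.exp(-sq_dist / (2.0 * sigma**2)).sum(axis=1) / (n * norm)
```

The regression formulation then seeks a sparse weighted subset of these N kernels that reproduces this full-sample target.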

Relevance:

30.00%

Publisher:

Abstract:

A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
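
As a sketch of the selection step: with the usual orthogonal decomposition of the selected design matrix, the D-optimality determinant factorises into the energies of the orthogonalised columns, so the greedy stage simply picks the candidate kernel with the largest residual energy. A minimal illustration, assuming a precomputed kernel design matrix `Phi` (names are hypothetical):

```python
import numpy as np

def ofr_d_optimality(Phi, n_select):
    """Greedy orthogonal forward selection under D-optimality: at each
    stage choose the candidate column whose component orthogonal to the
    already-selected columns carries the most energy, which maximises
    the determinant of the selected design matrix."""
    selected = []
    R = Phi.astype(float).copy()        # candidates, orthogonalised so far
    for _ in range(n_select):
        energy = np.sum(R**2, axis=0)
        if selected:
            energy[selected] = -np.inf  # never reselect a chosen kernel
        k = int(np.argmax(energy))
        w = R[:, k].copy()
        selected.append(k)
        # Gram-Schmidt: remove the new direction from every candidate
        R -= np.outer(w, w @ R) / (w @ w)
    return selected
```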

Relevance:

30.00%

Publisher:

Abstract:

Using the classical Parzen window (PW) estimate as the desired response, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegativity and unity constraints on the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct an SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate.
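
The multiplicative weight update is the other moving part. Below is a simplified sketch in the spirit of MNQP for min ½wᵀBw − wᵀc with w ≥ 0 and Σw = 1; the published algorithm enforces the sum-to-one constraint through a Lagrange multiplier, and the plain renormalisation used here is an assumption for illustration.

```python
import numpy as np

def mnqp_weights_sketch(B, c, n_iter=200, eps=1e-12):
    """Simplified multiplicative update in the spirit of MNQP for
    min (1/2) w'Bw - w'c  s.t.  w >= 0, sum(w) = 1.  Assumes B and c
    are elementwise nonnegative, as with Gaussian kernel Gram matrices.
    Weights driven to zero stay at zero, which is how the update can
    shrink the model further."""
    m = len(c)
    w = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        w = w * c / (B @ w + eps)   # multiplicative step preserves w >= 0
        w /= w.sum()                # renormalise onto the simplex
    return w
```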

Relevance:

30.00%

Publisher:

Abstract:

The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sum of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of the squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
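
The PRESS statistic exploits a classical identity: for a linear-in-the-weights model, the leave-one-out residual equals the ordinary residual divided by one minus the leverage, so no refitting is needed. A direct (non-recursive) sketch follows; the paper's OFR implementation updates this quantity incrementally as regressors are added, which this version does not show.

```python
import numpy as np

def press_statistic(X, y):
    """Delete-1 (leave-one-out) test error sum for a linear-in-the-weights
    model via the PRESS identity e_loo_i = e_i / (1 - h_ii)."""
    G = np.linalg.solve(X.T @ X, X.T)    # (X'X)^{-1} X'
    h = np.einsum('ij,ji->i', X, G)      # leverages: diagonal of X G
    residuals = y - X @ (G @ y)          # ordinary least-squares residuals
    return np.sum((residuals / (1.0 - h))**2)
```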

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix, determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with a leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
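
The departure from the standard model is easy to state in code: each regressor carries its own centre and diagonal covariance rather than one shared variance. A minimal sketch of such a regressor (the boosting-based random search that tunes these parameters is omitted; names are illustrative):

```python
import numpy as np

def generalized_kernel(x, centre, diag_cov):
    """Gaussian regressor with an individually tuned diagonal covariance:
    every input dimension gets its own width, unlike the standard kernel
    model's single common variance shared by all regressors."""
    z = (x - centre)**2 / diag_cov
    return np.exp(-0.5 * np.sum(z, axis=-1))
```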

Relevance:

30.00%

Publisher:

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct a very compact and accurate density estimate.
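
In mixture form the constructed estimate reads as below, assuming Gaussian kernels (the abstract does not name the kernel family explicitly); M is the number of selected kernels, and the constraints on the mixing weights are the ones the MNQP update enforces.

```latex
\hat{p}(\mathbf{x}) \;=\; \sum_{k=1}^{M} \beta_k\,
  \mathcal{N}\!\bigl(\mathbf{x};\,\boldsymbol{\mu}_k,\,
  \operatorname{diag}(\sigma_{k,1}^2,\dots,\sigma_{k,d}^2)\bigr),
\qquad \beta_k \ge 0,\quad \sum_{k=1}^{M}\beta_k = 1 .
```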

Relevance:

30.00%

Publisher:

Abstract:

This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels, related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data and also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive, in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
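
The eigenvalue remark has a compact justification: with the orthogonal decomposition Φₛ = W A of the selected design matrix (A unit upper triangular, so det A = 1), the D-optimality determinant factorises into the energies of the orthogonalised columns, which is why greedily maximising those energies favours the dominant directions of the kernel design matrix and keeps the weight estimation well conditioned.

```latex
\det\bigl(\Phi_s^{\mathsf T}\Phi_s\bigr)
  \;=\; \det\bigl(A^{\mathsf T} W^{\mathsf T} W A\bigr)
  \;=\; \prod_{k} \mathbf{w}_k^{\mathsf T}\mathbf{w}_k .
```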

Relevance:

30.00%

Publisher:

Abstract:

Previous fieldwork on the spatial distribution of CO2 in classrooms has demonstrated evidence of variations in CO2 concentration across a classroom space. Significant fluctuations in CO2 concentration were found at different sampling points, depending on the ventilation strategies and environmental conditions prevailing in individual classrooms. However, how these variations are affected by the emitting sources and the room air movement remains unknown. Hence, it was concluded that detailed investigation of the CO2 distribution needs to be performed on a smaller scale. As a result, it was decided to use an environmental chamber with various methods and rates of ventilation, for the same internal temperature and heat loads, to study the effect of ventilation strategy and air movement on the distribution of CO2 concentration in a room. The role of human exhalation and its interaction with the plume induced by the body's convective flow and the room air movement due to different ventilation strategies were studied in a chamber at the University of Reading. These phenomena are considered important for understanding and predicting the flow patterns in a space and how these impact the distribution of contaminants. This paper studies the CO2 dispersion and distribution in the exhalation zone of two people sitting in a chamber, as well as throughout the occupied zone of the chamber. The horizontal and vertical distributions of CO2 were sampled at locations where the CO2 variation was expected to be high. Although room size, source location, ventilation rate and the location of air supply and extract devices can all influence the CO2 distribution, this article gives general guidelines on the optimum positioning of a CO2 sensor in a room.

Relevance:

30.00%

Publisher:

Abstract:

We study initial-boundary value problems for linear evolution equations of arbitrary spatial order, subject to arbitrary linear boundary conditions and posed on a rectangular domain of one space and one time dimension. We give a new characterisation of the boundary conditions that specify well-posed problems using Fokas' transform method. We also give a sufficient condition guaranteeing that the solution can be represented as a series. The relevant condition, the analyticity at infinity of certain meromorphic functions within particular sectors, is significantly more concrete and easier to test than the previous criterion, which was based on the existence of admissible functions.

Relevance:

30.00%

Publisher:

Abstract:

The dynamics of Northern Hemisphere major midwinter stratospheric sudden warmings (SSWs) are examined using transient climate change simulations from the Canadian Middle Atmosphere Model (CMAM). The simulated SSWs show good overall agreement with reanalysis data in terms of composite structure, statistics, and frequency. Using observed or model sea surface temperatures (SSTs) is found to make no significant difference to the SSWs, indicating that the use of model SSTs in the simulations extending into the future is not an issue. When SSWs are defined by the standard (wind based) definition, an absolute criterion, their frequency is found to increase by ~60% by the end of this century, in conjunction with a ~25% decrease in their temperature amplitude. However, when a relative criterion based on the northern annular mode index is used to define the SSWs, no future increase in frequency is found. The latter is consistent with the fact that the variance of 100-hPa daily heat flux anomalies is unaffected by climate change. The future increase in frequency of SSWs using the standard method is a result of the weakened climatological mean winds resulting from climate change, which make it easier for the SSW criterion to be met. A comparison of winters with and without SSWs reveals that the weakening of the climatological westerlies is not a result of SSWs. The Brewer–Dobson circulation is found to be stronger by ~10% during winters with SSWs, a value that does not change significantly in the future.
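
In common usage the "standard (wind based) definition" is a reversal of the zonal-mean zonal wind at 10 hPa and 60°N in winter; below is a minimal detection sketch under that assumption. The usual extra screening that excludes stratospheric final warmings, and any CMAM-specific thresholds, are omitted here.

```python
def detect_ssw_central_dates(u, min_separation=20):
    """Flag major SSW central dates with the wind-reversal criterion:
    daily zonal-mean zonal wind u (m/s) at 10 hPa, 60N turns easterly.
    `u` is assumed restricted to extended winter; reversals closer than
    `min_separation` days are treated as one warming."""
    events = []
    for t in range(1, len(u)):
        if u[t - 1] >= 0.0 > u[t]:               # westerly -> easterly
            if not events or t - events[-1] > min_separation:
                events.append(t)
    return events
```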

Relevance:

30.00%

Publisher:

Abstract:

A mesoscale meteorological model (FOOT3DK) is coupled with a gas exchange model to simulate surface fluxes of CO2 and H2O under field conditions. The gas exchange model consists of a C3 single-leaf photosynthesis sub-model and an extended big-leaf (sun/shade) sub-model that divides the canopy into sunlit and shaded fractions. Simulated CO2 fluxes of the stand-alone version of the gas exchange model correspond well to eddy-covariance measurements at a test site in a rural area in the west of Germany. The coupled FOOT3DK/gas exchange model is validated for the diurnal cycle at individual grid points, and delivers realistic fluxes with respect to their order of magnitude and general daily course. Compared to the Jarvis-based big-leaf scheme, simulations of latent heat fluxes with a photosynthesis-based scheme for stomatal conductance are more realistic. As expected, flux averages are strongly influenced by the underlying land cover. While the simulated net ecosystem exchange is highly correlated with leaf area index, this correlation is much weaker for the latent heat flux. Photosynthetic CO2 uptake is associated with transpirational water loss via the stomata, and the resulting opposing surface fluxes of CO2 and H2O are reproduced with the model approach. Over vegetated surfaces it is shown that coupling a photosynthesis-based gas exchange model with the land-surface scheme of a mesoscale model results in more realistic simulated latent heat fluxes.
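
For context on the contrast drawn with the Jarvis approach: a widely used photosynthesis-based stomatal scheme is the Ball–Berry relation, sketched below for illustration. This is not necessarily the exact scheme coupled into FOOT3DK, and the coefficient values are typical C3 assumptions.

```python
def ball_berry_conductance(A_n, h_s, c_s, g0=0.01, g1=9.0):
    """Ball-Berry stomatal conductance, in mol m-2 s-1: ties the stomata
    to the photosynthesis rate rather than to environmental factors alone.
    A_n : net assimilation rate (umol m-2 s-1)
    h_s : fractional relative humidity at the leaf surface
    c_s : CO2 mole fraction at the leaf surface (umol mol-1)
    g0, g1 : empirical intercept and slope (typical C3 values assumed)"""
    return g0 + g1 * A_n * h_s / c_s   # umol/umol cancels to mol m-2 s-1
```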

Relevance:

30.00%

Publisher:

Abstract:

Vertical divergence of CO2 fluxes is observed over two Midwestern AmeriFlux forest sites. The differences in ensemble-averaged hourly CO2 fluxes measured at two heights above canopy are relatively small (0.2–0.5 μmol m⁻² s⁻¹), but they are the major contributors to differences (76–256 g C m⁻² or 41.8–50.6%) in estimated annual net ecosystem exchange (NEE) in 2001. A friction velocity criterion is used in these estimates but mean flow advection is not accounted for. This study examines the effects of coordinate rotation, averaging time period, sampling frequency and co-spectral correction on CO2 fluxes measured at a single height, and on vertical flux differences measured between two heights. Both the offset in measured vertical velocity and the downflow/upflow caused by supporting tower structures in upwind directions lead to systematic over- or under-estimates of fluxes measured at a single height. An offset of 1 cm s⁻¹ and an upflow/downflow of 1° lead to 1% and 5.6% differences in momentum fluxes and nighttime sensible heat and CO2 fluxes, respectively, but only 0.5% and 2.8% differences in daytime sensible heat and CO2 fluxes. The sign and magnitude of both the offset and the upflow/downflow angle vary between sonic anemometers at the two measurement heights. This introduces a systematic and large bias in vertical flux differences if these effects are not corrected in the coordinate rotation. A 1 h averaging time period is shown to be appropriate for the two sites. In the daytime, the absolute magnitudes of co-spectra decrease with height in the natural frequency range of 0.02–0.1 Hz but increase at lower frequencies (<0.01 Hz). Thus, air motions in these two frequency ranges counteract each other in determining vertical flux differences, whose magnitude and sign vary with averaging time period. At night, co-spectral densities of CO2 are more positive at the higher levels of both sites in the frequency range of 0.03–0.4 Hz, and this vertical increase is also seen at most frequencies below 0.03 Hz. Differences in co-spectral corrections at the two heights lead to a positive shift in vertical CO2 flux differences throughout the day at both sites. At night, the vertical CO2 flux differences between the two measurement heights are 20–30% and 40–60% of the co-spectrally corrected CO2 fluxes measured at the lower levels of the two sites, respectively. Vertical differences of CO2 flux are relatively small in the daytime. Vertical differences in estimated mean vertical advection of CO2 between the two measurement heights generally do not improve the closure of the 1D (vertical) CO2 budget in the air layer between the two measurement heights. This may imply the significance of horizontal advection. However, a reliable assessment of mean advection contributions to the annual NEE estimate at these two AmeriFlux sites is currently an unsolved problem.
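
The coordinate-rotation sensitivity discussed here concerns the standard tilt corrections applied to sonic anemometer data. Below is a minimal sketch of the classical double rotation, which forces the mean crosswind and mean vertical velocity to zero over each averaging period; it is a standard method, not necessarily the exact procedure used at these sites.

```python
import numpy as np

def double_rotation(u, v, w):
    """Classical double-rotation tilt correction for one averaging
    period of sonic anemometer wind components (arrays of samples)."""
    theta = np.arctan2(np.mean(v), np.mean(u))   # yaw: zero the mean v
    u1 = u * np.cos(theta) + v * np.sin(theta)
    v1 = -u * np.sin(theta) + v * np.cos(theta)
    phi = np.arctan2(np.mean(w), np.mean(u1))    # pitch: zero the mean w
    u2 = u1 * np.cos(phi) + w * np.sin(phi)
    w2 = -u1 * np.sin(phi) + w * np.cos(phi)
    return u2, v1, w2
```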

Relevance:

30.00%

Publisher:

Abstract:

Existing urban meteorological networks have an important role to play as test beds for inexpensive and more sustainable measurement techniques that are now becoming possible in our increasingly smart cities. The Birmingham Urban Climate Laboratory (BUCL) is a near-real-time, high-resolution urban meteorological network (UMN) of automatic weather stations and inexpensive, nonstandard air temperature sensors. The network has recently been implemented with an initial focus on monitoring urban heat, infrastructure, and health applications. A number of UMNs exist worldwide; however, BUCL is novel in its density, the low-cost nature of its sensors, and its use of proprietary Wi-Fi networks. This paper provides an overview of the logistical aspects of implementing a UMN test bed at such a density, including selecting appropriate urban sites; testing and calibrating low-cost, nonstandard equipment; implementing strict quality-assurance/quality-control mechanisms (including metadata); and utilizing preexisting Wi-Fi networks to transmit data. Also included are visualizations of data collected by the network, including data from the July 2013 U.K. heatwave, as well as a discussion of potential applications. The paper is an open invitation to use the facility as a test bed for evaluating models and/or other nonstandard observation techniques, such as data generated via crowdsourcing.

Relevance:

30.00%

Publisher:

Abstract:

Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p-value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to its multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
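
The conservatism the authors address is easy to quantify: the standard Bonferroni threshold divides the significance level by the number of tests, which becomes punishing at GWAS scale. A one-line baseline for comparison; the paper's modified correction differs, and its exact form is not given in this abstract.

```python
def bonferroni_threshold(alpha, m_tests):
    """Standard Bonferroni-corrected per-test p-value threshold."""
    return alpha / m_tests

# e.g. one million SNPs at alpha = 0.05 demands p < 5e-8 per test
print(bonferroni_threshold(0.05, 1_000_000))
```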

Relevance:

30.00%

Publisher:

Abstract:

The cold sector of a midlatitude storm is characterized by distinctive features such as strong surface heat fluxes, shallow convection, convective precipitation and synoptic subsidence. In order to evaluate the contribution of processes occurring in the cold sector to the mean climate, an appropriate indicator is needed. This study describes the systematic presence of negative potential vorticity (PV) behind the cold front of extratropical storms in winter. The origin of this negative PV is analyzed using ERA-Interim data, and PV tendencies averaged over the depth of the boundary layer are evaluated. It is found that negative PV is generated by diabatic processes in the cold sector and by Ekman pumping at the low centre, whereas positive PV is generated by Ekman advection of potential temperature in the warm sector. We suggest here that negative PV at low levels can be used to identify the cold sector. A PV-based indicator is applied to estimate the respective contributions of the cold sector and the remainder of the storm to upward motion and to large-scale and convective precipitation. We compare the PV-based indicator with other distinctive features that could be used as markers of the cold sector, and find that potential vorticity is the best criterion, both on its own and in combination with any other.
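
For reference, the indicator is built on Ertel potential vorticity, conventionally written as

```latex
q \;=\; \frac{1}{\rho}\,\bigl(2\boldsymbol{\Omega} + \nabla\times\mathbf{u}\bigr)\cdot\nabla\theta ,
```

with ρ the density, Ω the Earth's rotation vector, u the wind and θ the potential temperature. Since q is positive almost everywhere in the Northern Hemisphere climatology, low-level patches of negative q single out the diabatically modified air behind the cold front.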