847 results for Discrete Regression and Qualitative Choice Models
Abstract:
The high level of realism and interaction in many computer graphics applications requires techniques for processing complex geometric models. First, we present a method that produces an accurate low-resolution approximation of a multi-chart textured model while guaranteeing geometric fidelity and correct preservation of the appearance attributes. Then, we introduce a mesh structure called Compact Model that approximates dense triangular meshes while preserving sharp features, allowing adaptive reconstructions and supporting textured models. Next, we design a new space deformation technique called *Cages, based on a multi-level system of cages, which preserves the smoothness of the mesh between neighbouring cages and is extremely versatile, allowing the use of heterogeneous sets of coordinates and different levels of deformation. Finally, we propose a hybrid method that allows any deformation technique to be applied to large models, obtaining high-quality results with a reduced memory footprint and high performance.
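Although the thesis details are not given here, the core operation shared by cage-based space deformation methods can be illustrated with a minimal sketch: once generalized barycentric coordinates of each mesh vertex with respect to its cage are precomputed, deforming the model reduces to a weighted sum of the displaced cage vertices. The particular coordinate type and the single-cage setup below are assumptions for illustration; *Cages itself combines several coordinate types across a hierarchy of cages.

```python
import numpy as np

def deform_with_cage(phi, cage_vertices_deformed):
    """Apply a precomputed cage-coordinate matrix to the edited cage.

    phi : (n_mesh_vertices, n_cage_vertices) generalized barycentric
          coordinates, rows summing to 1 (e.g. mean value or harmonic
          coordinates -- the specific choice is an assumption here).
    cage_vertices_deformed : (n_cage_vertices, 3) user-edited cage.
    Returns deformed mesh vertex positions, shape (n_mesh_vertices, 3).
    """
    return phi @ cage_vertices_deformed

# Toy usage: two mesh vertices controlled by a 4-vertex cage.
phi = np.array([[0.25, 0.25, 0.25, 0.25],
                [0.70, 0.10, 0.10, 0.10]])
cage = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
cage[1] += np.array([0.5, 0.0, 0.0])   # move one cage vertex
print(deform_with_cage(phi, cage))
```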
Abstract:
This doctoral thesis offers a quantitative and qualitative analysis of the changes in the urban shape and landscape of the Girona Counties between 1979 and 2006. The theoretical part of the research lies within the framework of the dispersed city phenomenon, and is based on the hypothesis of convergence towards a global urban model. The empirical part demonstrates this proposition with a study of 522 zone development plans in the Girona Counties. The results point to the consolidation of the dispersed city phenomenon, as shown by the sudden increase in built-up space, the spread of urban development throughout the territory, and the emergence of a new, increasingly generic landscape comprising three major morphological types: urban extensions, low-density residential estates and industrial zones. This reveals shortcomings in the planning of urban growth, a weakening of the city as a public project, and a certain degradation of the Mediterranean city model.
Abstract:
Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess, but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last of these include alleged under-performance, the fabrication of tournament results, and the clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
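As an illustration of the kind of inference described, the sketch below implements a deliberately simplified fallible-choice model, not the authors' actual formulation: a player of skill s chooses among engine-evaluated moves with softmax probabilities, and Bayes' rule yields a posterior over s from the observed choices. The move evaluations and the skill grid are invented for the example.

```python
import numpy as np

def choice_probs(evals, skill):
    """Softmax over move evaluations (in pawns); higher skill -> sharper choices."""
    z = skill * np.asarray(evals)
    z -= z.max()                                  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def posterior_over_skill(positions, chosen, skill_grid):
    """positions: list of per-move evaluation arrays; chosen: index played in each."""
    log_post = np.zeros(len(skill_grid))          # flat prior over the grid
    for evals, c in zip(positions, chosen):
        for i, s in enumerate(skill_grid):
            log_post[i] += np.log(choice_probs(evals, s)[c])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Toy data: three positions, each with engine evaluations of the candidate moves.
positions = [np.array([0.8, 0.3, -0.2]), np.array([0.1, 0.0]), np.array([1.5, 0.2, 0.1])]
chosen = [0, 0, 0]                                # the top move was played each time
grid = np.linspace(0.5, 10.0, 20)
print(posterior_over_skill(positions, chosen, grid))
```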
Abstract:
Tidal flats are important examples of extensive areas of natural environment that remain relatively unaffected by man. Monitoring of tidal flats is required for a variety of purposes. Remote sensing has become an established technique for the measurement of topography over tidal flats. A further requirement is to measure topographic changes in order to estimate sediment budgets. To date there have been few attempts to make quantitative estimates of morphological change over tidal flat areas. This paper illustrates the use of remote sensing to measure quantitative and qualitative changes in the tidal flats of Morecambe Bay during the relatively long period 1991–2007. An understanding of the patterns of sediment transport within the Bay is of considerable interest for coastal management and defence purposes. Tidal asymmetry is considered to be the dominant cause of morphological change in the Bay, with the higher currents associated with the flood tide being the main agency moulding the channel system. Quantitative changes were measured by comparing a Digital Elevation Model (DEM) of the intertidal zone, formed using the waterline technique applied to satellite Synthetic Aperture Radar (SAR) images acquired between 1991 and 1994, with a second DEM constructed from airborne laser altimetry data acquired in 2005. Qualitative changes were studied using additional SAR images acquired since 2003. A significant movement of sediment from below Mean Sea Level (MSL) to above MSL was detected by comparing the two Digital Elevation Models, though the proportion of this change that could be ascribed to seasonal effects was not clear. Between 1991 and 2004 the Ulverston channel of the river Leven migrated north-east by about 5 km, followed by the development of a straighter channel to the west, leaving the previous channel decoupled from the river. This is thought to be due to independent tidal and fluvial forcing mechanisms acting on the channel. The results demonstrate the effectiveness of remote sensing for the measurement of long-term morphological change in tidal flat areas. An alternative use of waterlines as partial bathymetry for assimilation into a morphodynamic model of the coastal zone is also discussed.
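The quantitative comparison described amounts to differencing two co-registered DEMs and summing the elevation changes into a sediment budget, split by position relative to MSL. The sketch below shows that calculation under assumed inputs; the grids, cell size and MSL value are placeholders, and this is not the authors' processing chain.

```python
import numpy as np

def sediment_budget(dem_old, dem_new, cell_area_m2, msl=0.0):
    """Volume change (m^3) of a tidal flat, split by elevation relative to MSL.

    dem_old, dem_new : 2-D arrays of elevations (m) on the same grid,
                       with NaN outside the intertidal zone.
    """
    dz = dem_new - dem_old
    valid = ~np.isnan(dz)
    above = valid & (dem_old >= msl)
    below = valid & (dem_old < msl)
    return {
        "total_m3": np.nansum(dz[valid]) * cell_area_m2,
        "above_msl_m3": np.nansum(dz[above]) * cell_area_m2,
        "below_msl_m3": np.nansum(dz[below]) * cell_area_m2,
    }

# Toy grids standing in for the SAR-waterline and lidar DEMs.
old = np.array([[0.2, -0.1], [0.4, np.nan]])
new = np.array([[0.3,  0.0], [0.5, np.nan]])
print(sediment_budget(old, new, cell_area_m2=25.0))
```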
Abstract:
The farm-level success of Bt-cotton in developing countries is well documented. However, the literature has only recently begun to recognise the importance of accounting for the effects of the technology on production risk, in addition to the mean effect estimated by previous studies. The risk effects of the technology are likely to be particularly important to smallholder farmers in the developing world because of their risk aversion. We advance the emerging literature on Bt-cotton and production risk by using panel data methods to control for the possible endogeneity of Bt adoption. We estimate two models, the first a fixed-effects version of the Just and Pope model with additive individual and time effects, and the second a variation of the model in which inputs and variety choice are allowed to affect the variance of the time effect and its correlation with the idiosyncratic error. The models are applied to panel data on smallholder cotton production in India and South Africa. Our results suggest a risk-reducing effect of Bt-cotton in India, but an inconclusive picture in South Africa.
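For context, the Just and Pope framework separates the effect of inputs on mean output from their effect on output variance; a fixed-effects version of the kind described can be sketched as follows (the paper's exact specification, including how the second model lets inputs affect the time-effect variance, may differ):

```latex
y_{it} \;=\; f(x_{it};\beta) \;+\; \mu_i \;+\; \tau_t
        \;+\; h(x_{it};\alpha)^{1/2}\,\varepsilon_{it},
\qquad
\mathbb{E}[\varepsilon_{it}] = 0,\quad \operatorname{Var}[\varepsilon_{it}] = 1,
```

so that the mean function f and the variance (risk) function h, each depending on inputs and the Bt-variety indicator in x_{it}, are estimated jointly, with mu_i and tau_t the additive individual and time effects.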
Abstract:
We report rates of regression and associated findings in a population-derived group of 255 children aged 9–14 years who participated in a prevalence study of autism spectrum disorders (ASD): 53 with narrowly defined autism, 105 with broader ASD and 97 with non-ASD neurodevelopmental problems, drawn from those with special educational needs within a population of 56,946 children. Language regression was reported in 30% of those with narrowly defined autism, 8% with broader ASD and less than 3% with developmental problems without ASD. A smaller group of children was identified who had undergone a less clearly defined setback. Regression was associated with higher rates of autistic symptoms and a deviation in developmental trajectory. Regression was not associated with epilepsy or gastrointestinal problems.
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
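To make the regression view of density estimation concrete, the sketch below implements a much-simplified version of the idea, not the authors' algorithm: the Parzen window estimate evaluated at the training points serves as the regression target, kernel centres are added greedily, and non-negative weights summing to one turn the fit into a valid density. The greedy criterion here is plain training error rather than the leave-one-out score and local regularization used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def gauss_kernel(X, centres, h):
    """Normalized 1-D Gaussian kernels K(x, c) with bandwidth h."""
    d2 = (X[:, None] - centres[None, :]) ** 2
    return np.exp(-0.5 * d2 / h**2) / (np.sqrt(2 * np.pi) * h)

def sparse_kde(X, h, n_kernels):
    """Greedy sparse approximation of the Parzen estimate at the training points."""
    target = gauss_kernel(X, X, h).mean(axis=1)      # Parzen values as regression targets
    chosen = []
    for _ in range(n_kernels):
        best, best_err = None, np.inf
        for j in range(len(X)):                      # try each remaining candidate centre
            if j in chosen:
                continue
            Phi = gauss_kernel(X, X[chosen + [j]], h)
            w, _ = nnls(Phi, target)                 # non-negative weights
            err = np.sum((Phi @ w - target) ** 2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    Phi = gauss_kernel(X, X[chosen], h)
    w, _ = nnls(Phi, target)
    w = w / w.sum()                                  # weights sum to one -> a density
    return X[chosen], w

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])
centres, weights = sparse_kde(X, h=0.4, n_kernels=5)
print(centres, weights)
```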
Abstract:
A unified approach is proposed for data modelling that includes supervised regression and classification applications as well as unsupervised probability density function estimation. The orthogonal-least-squares regression based on the leave-one-out test criteria is formulated within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic data-modelling approach for constructing parsimonious kernel models with excellent generalisation capability.
Abstract:
We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it does not have learning hyperparameters that must be tuned using costly cross-validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
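The computational appeal of LOO criteria in such orthogonal forward regression schemes comes from the fact that, for models linear in their parameters, leave-one-out residuals are available in closed form without refitting; a standard identity of this kind (an illustration rather than the exact recursion used with the orthogonal decomposition) is

```latex
e_i^{(-i)} \;=\; \frac{e_i}{1 - H_{ii}},
\qquad
\mathrm{LOO\ MSE} \;=\; \frac{1}{N}\sum_{i=1}^{N}
      \left(\frac{e_i}{1 - H_{ii}}\right)^{2},
\qquad
H \;=\; \Phi\,(\Phi^{\top}\Phi)^{-1}\Phi^{\top},
```

where Phi is the regressor (design) matrix, e_i the ordinary residual for sample i, and e_i^{(-i)} the residual obtained when sample i is left out of the fit.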
Abstract:
We consider the classical coupled, combined-field integral equation formulations for time-harmonic acoustic scattering by a sound soft bounded obstacle. In recent work, we have proved lower and upper bounds on the $L^2$ condition numbers for these formulations, and also on the norms of the classical acoustic single- and double-layer potential operators. These bounds to some extent make explicit the dependence of condition numbers on the wave number $k$, the geometry of the scatterer, and the coupling parameter. For example, with the usual choice of coupling parameter they show that, while the condition number grows like $k^{1/3}$ as $k\to\infty$, when the scatterer is a circle or sphere, it can grow as fast as $k^{7/5}$ for a class of `trapping' obstacles. In this paper we prove further bounds, sharpening and extending our previous results. In particular we show that there exist trapping obstacles for which the condition numbers grow as fast as $\exp(\gamma k)$, for some $\gamma>0$, as $k\to\infty$ through some sequence. This result depends on exponential localisation bounds on Laplace eigenfunctions in an ellipse that we prove in the appendix. We also clarify the correct choice of coupling parameter in 2D for low $k$. In the second part of the paper we focus on the boundary element discretisation of these operators. We discuss the extent to which the bounds on the continuous operators are also satisfied by their discrete counterparts and, via numerical experiments, we provide supporting evidence for some of the theoretical results, both quantitative and asymptotic, indicating further which of the upper and lower bounds may be sharper.
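For readers outside the integral equations community, the combined-field (combined potential) operator referred to couples, in one standard form, the acoustic single- and double-layer operators S_k and D_k through the coupling parameter eta; the L^2 condition number whose k-dependence is being bounded is then the usual one. The notation here is the common one and may differ in detail from the paper's:

```latex
A_{k,\eta} \;=\; \tfrac{1}{2}I + D_k - \mathrm{i}\,\eta\, S_k,
\qquad
\operatorname{cond}\!\big(A_{k,\eta}\big)
  \;=\; \big\|A_{k,\eta}\big\|_{L^2(\Gamma)}\,
        \big\|A_{k,\eta}^{-1}\big\|_{L^2(\Gamma)},
```

with the usual choice eta of order k giving the $k^{1/3}$ growth quoted above for the circle and sphere.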
Abstract:
This paper analyzes the use of linear and neural network models for financial distress classification, with emphasis on the issues of input variable selection and model pruning. A data-driven method for selecting input variables (financial ratios, in this case) is proposed. A case study involving 60 British firms in the period 1997–2000 is used for illustration. It is shown that the use of the Optimal Brain Damage pruning technique can considerably improve the generalization ability of a neural model. Moreover, the set of financial ratios obtained with the proposed selection procedure is shown to be an appropriate alternative to the ratios usually employed by practitioners.
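For reference, Optimal Brain Damage prunes weights by a saliency computed from a diagonal approximation of the Hessian of the training error E: after training to a local minimum, each weight w_k is scored as

```latex
s_k \;=\; \tfrac{1}{2}\, h_{kk}\, w_k^{2},
\qquad
h_{kk} \;=\; \frac{\partial^{2} E}{\partial w_k^{2}},
```

and the weights with the smallest saliencies are removed before the network is retrained. This is the general recipe from LeCun et al.; the specific pruning schedule used in the case study is not detailed in the abstract.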
Abstract:
A significant challenge in the prediction of climate change impacts on ecosystems and biodiversity is quantifying the sources of uncertainty that emerge within and between different models. Statistical species niche models have grown in popularity, yet no single best technique has been identified, reflecting differing performance in different situations. Our aim was to quantify uncertainties associated with the application of two complementary modelling techniques. Generalised linear mixed models (GLMM) and generalised additive mixed models (GAMM) were used to model the realised niche of ombrotrophic Sphagnum species in British peatlands. These models were then used to predict changes in Sphagnum cover between 2020 and 2050 based on projections of climate change and atmospheric deposition of nitrogen and sulphur. Over 90% of the variation in the GLMM predictions was due to niche model parameter uncertainty, dropping to 14% for the GAMM. After other factors had been covaried out, average variation in predicted values of Sphagnum cover across UK peatlands was the next largest source of variation (8% for the GLMM and 86% for the GAMM). The better performance of the GAMM needs to be weighed against its tendency to overfit the training data. While our niche models are only a first approximation, we used them to undertake a preliminary evaluation of the relative importance of climate change and nitrogen and sulphur deposition and of the geographic locations of the largest expected changes in Sphagnum cover. Predicted changes in cover were all small (generally <1% in an average 4 m² unit area) but also highly uncertain. Peatlands expected to be most affected by climate change in combination with atmospheric pollution were Dartmoor, the Brecon Beacons and the western Lake District.
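As a rough structural illustration of the kind of niche model fitted, and not the authors' specification (which used a non-Gaussian response and, for the GAMM, smooth terms), a mixed model with climate and deposition covariates and a site-level random intercept can be set up in Python as follows; the variable names and data values are invented placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: per-plot Sphagnum cover (%) with a temperature and a
# nitrogen-deposition covariate; all values are invented for illustration.
data = pd.DataFrame({
    "cover": [12.0, 12.8, 9.5, 9.1, 20.2, 19.6, 15.1, 15.8],
    "temp":  [7.1, 7.0, 6.3, 6.4, 8.0, 7.9, 6.8, 6.9],
    "n_dep": [14.0, 13.6, 16.5, 16.8, 9.2, 9.7, 11.0, 11.3],
    "site":  ["A", "A", "B", "B", "C", "C", "D", "D"],
})

# Fixed effects for climate and deposition, random intercept per peatland site;
# a linear mixed model is used here only as a stand-in for the GLMM/GAMM pair.
model = smf.mixedlm("cover ~ temp + n_dep", data, groups=data["site"])
fit = model.fit()
print(fit.summary())
```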
Abstract:
Models often underestimate blocking in the Atlantic and Pacific basins, and this can lead to errors in both weather and climate predictions. Horizontal resolution is often cited as the main culprit for blocking errors because of poorly resolved small-scale variability, the upscale effects of which help to maintain blocks. Although these processes are important for blocking, the authors show that much of the blocking error diagnosed using common methods of analysis and current climate models is directly attributable to the climatological bias of the model. This explains a large proportion of the diagnosed blocking error in models used in the recent Intergovernmental Panel on Climate Change report. Furthermore, greatly improved statistics are obtained by diagnosing blocking using climate model data corrected to account for mean model biases. To the extent that mean biases may be corrected in low-resolution models, this suggests that such models may be able to generate greatly improved levels of atmospheric blocking.
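The bias-correction step described is simple to express: the model's mean climatological error is removed from the field before the blocking index is computed. Below is a minimal sketch with a crude one-dimensional stand-in for a blocking index; real diagnostics such as Tibaldi-Molteni test gradient reversals of 500 hPa geopotential height at specific latitude pairs, longitude by longitude, and all numbers here are synthetic.

```python
import numpy as np

def bias_correct(field, clim_model, clim_reanalysis):
    """Remove the model's mean climatological bias from an instantaneous field."""
    return field - (clim_model - clim_reanalysis)

def gradient_reversal(z500, lat, lat_min=50.0):
    """Toy blocking indicator: is the meridional Z500 gradient reversed poleward
    of lat_min?  Only a schematic stand-in for a real blocking index."""
    dz_dlat = np.gradient(z500, lat)
    return bool(np.any(dz_dlat[lat > lat_min] > 0.0))

lat = np.arange(30.0, 76.0, 1.0)
clim_era   = 5600.0 - 7.0 * (lat - 30.0)                # reanalysis climatology
clim_model = 5600.0 - 9.0 * (lat - 30.0)                # model climatology: too zonal
ridge      = 50.0 * np.exp(-((lat - 62.0) / 5.0) ** 2)  # a blocking-like ridge
z_model    = clim_model + ridge                         # today's model field

print(gradient_reversal(z_model, lat))                  # raw field: the bias hides the block
print(gradient_reversal(bias_correct(z_model, clim_model, clim_era), lat))  # reversal appears
```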
Abstract:
The stratospheric climate and variability from simulations of sixteen chemistry-climate models is evaluated. On average the polar night jet is well reproduced, though its variability is less well reproduced, with a large spread between models. Polar temperature biases are less than 5 K except in the Southern Hemisphere (SH) lower stratosphere in spring. The accumulated area of low temperatures responsible for polar stratospheric cloud formation is accurately reproduced for the Antarctic but underestimated for the Arctic. The shape and position of the polar vortex is well simulated, as is the tropical upwelling in the lower stratosphere. There is a wide model spread in the frequency of major sudden stratospheric warmings (SSWs), late biases in the breakup of the SH vortex, and a weak annual cycle in the zonal wind in the tropical upper stratosphere. Quantitatively, “metrics” indicate a wide spread in model performance for most diagnostics, with systematic biases in many, and poorer performance in the SH than in the Northern Hemisphere (NH). Correlations were found in the SH between errors in the final warming, polar temperatures, the leading mode of variability, and jet strength, and in the NH between errors in polar temperatures, frequency of major SSWs, and jet strength. Models with a stronger quasi-biennial oscillation (QBO) have stronger tropical upwelling and a colder NH vortex. Both the qualitative and quantitative analysis indicate a number of common and long-standing model problems, particularly related to the simulation of the SH and of stratospheric variability.
Abstract:
This study investigated the potential application of mid-infrared spectroscopy (MIR, 4,000–900 cm⁻¹) for the determination of milk coagulation properties (MCP), titratable acidity (TA), and pH in Brown Swiss milk samples (n = 1,064). Because MCP directly influence the efficiency of the cheese-making process, there is strong industrial interest in developing a rapid method for their assessment. Currently, the determination of MCP involves time-consuming laboratory-based measurements, and it is not feasible to carry out these measurements on the large numbers of milk samples associated with milk recording programs. Mid-infrared spectroscopy is an objective and nondestructive technique providing rapid real-time analysis of food compositional and quality parameters. Analysis of milk rennet coagulation time (RCT, min), curd firmness (a30, mm), TA (SH°/50 mL; SH° = Soxhlet-Henkel degree), and pH was carried out, and MIR data were recorded over the spectral range of 4,000 to 900 cm⁻¹. Models were developed by partial least squares regression using untreated and pretreated spectra. The MCP, TA, and pH prediction models were improved by using the combined spectral ranges of 1,600 to 900 cm⁻¹, 3,040 to 1,700 cm⁻¹, and 4,000 to 3,470 cm⁻¹. The root mean square errors of cross-validation for the developed models were 2.36 min (RCT, range 24.9 min), 6.86 mm (a30, range 58 mm), 0.25 SH°/50 mL (TA, range 3.58 SH°/50 mL), and 0.07 (pH, range 1.15). The most successfully predicted attributes were TA, RCT, and pH. The model for the prediction of TA provided approximate prediction (R² = 0.66), whereas the predictive models developed for RCT and pH could discriminate between high and low values (R² = 0.59 to 0.62). It was concluded that, although the models require further development to improve their accuracy before their application in industry, MIR spectroscopy has potential application for the assessment of RCT, TA, and pH during routine milk analysis in the dairy industry. The implementation of such models could be a means of improving MCP through phenotype-based selection programs and of amending milk payment systems to incorporate MCP into their payment criteria.
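The modelling step described, partial least squares regression from MIR spectra to a reference trait evaluated by cross-validation, can be sketched with scikit-learn as follows; the spectra and RCT values here are random placeholders, and the number of latent variables would in practice itself be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 200, 500        # stand-ins for 1,064 samples x MIR channels
X = rng.normal(size=(n_samples, n_wavenumbers))       # placeholder absorbance spectra
y = rng.normal(loc=20.0, scale=3.0, size=n_samples)   # placeholder RCT values (min)

pls = PLSRegression(n_components=10)       # number of latent variables: a tuning choice
y_cv = cross_val_predict(pls, X, y, cv=10) # 10-fold cross-validated predictions

rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
r2 = 1.0 - np.sum((y - y_cv.ravel()) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f} min, cross-validated R^2 = {r2:.2f}")
```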