933 results for Automatic theorem proving


Relevance: 20.00%

Abstract:

The potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has, in recent years, drawn attention mainly in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is how to estimate the parameters of the type-2 fuzzy membership functions (T2MFs) and their Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs) with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the chromosome has fewer genes than in other GA methods and chromosome initialization is more precise. The proposed approach applies the interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
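The abstract gives no implementation details, but the general idea can be caricatured in a few lines. The sketch below encodes a Gaussian interval type-2 membership function whose uncertain standard deviation spans the FOU, and fits its three genes (mean and the two bounding sigmas) with a toy genetic algorithm; the fitness function, data, and every parameter value are invented stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_it2mf(x, mean, sigma_min, sigma_max):
    # The upper MF uses the larger sigma and the lower MF the smaller one;
    # the band between them is the footprint of uncertainty (FOU).
    lower = np.exp(-0.5 * ((x - mean) / sigma_min) ** 2)
    upper = np.exp(-0.5 * ((x - mean) / sigma_max) ** 2)
    return lower, upper

def fitness(chrom, x, target):
    # Toy fitness: negative squared error between the FOU midpoint and
    # noisy target memberships (a stand-in for cross-validated accuracy).
    mean, s_min, s_max = chrom
    lo, up = gaussian_it2mf(x, mean, s_min, s_max)
    return -np.mean(((lo + up) / 2 - target) ** 2)

def evolve(x, target, pop_size=30, gens=50):
    # Each chromosome holds only three genes: (mean, sigma_min, sigma_max).
    pop = np.column_stack([
        rng.uniform(-1.0, 1.0, pop_size),   # mean
        rng.uniform(0.1, 0.5, pop_size),    # sigma_min
        rng.uniform(0.5, 2.0, pop_size),    # sigma_max
    ])
    for _ in range(gens):
        scores = np.array([fitness(c, x, target) for c in pop])
        elite = pop[np.argsort(scores)[-(pop_size // 2):]]     # truncation selection
        children = elite + rng.normal(0.0, 0.05, elite.shape)  # Gaussian mutation
        # Keep sigmas positive and ordered so sigma_min <= sigma_max.
        children[:, 1:] = np.sort(np.abs(children[:, 1:]) + 1e-3, axis=1)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(c, x, target) for c in pop])
    return pop[np.argmax(scores)]

x = np.linspace(-3.0, 3.0, 61)
target = np.exp(-0.5 * (x / 0.8) ** 2) + rng.normal(0.0, 0.05, x.size)
best = evolve(x, target)
```

The paper's actual fitness would be cross-validated classification accuracy of the full IT2FLS; the point here is only the small chromosome (three genes per MF) the abstract emphasizes.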

Relevance: 20.00%

Abstract:

Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a mere informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent necessity of fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least there is no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results are less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.

Relevance: 20.00%

Abstract:

To be diagnostically useful, structural MRI must reliably distinguish Alzheimer's disease (AD) from normal ageing in individual scans. Recent advances in statistical learning theory have led to the application of support vector machines to MRI for detection of a variety of disease states. The aims of this study were to assess how successfully support vector machines assigned individual diagnoses and to determine whether datasets combined from multiple scanners and different centres could be used to obtain effective classification of scans. We used linear support vector machines to classify the grey matter segment of T1-weighted MR scans from pathologically proven AD patients and cognitively normal elderly individuals obtained from two centres with different scanning equipment. Because the clinical diagnosis of mild AD is difficult, we also tested the ability of support vector machines to differentiate control scans from those of patients without post-mortem confirmation. Finally, we sought to use these methods to differentiate scans of patients with AD from those of patients with frontotemporal lobar degeneration. Up to 96% of pathologically verified AD patients were correctly classified using whole brain images. Data from different centres were successfully combined, achieving results comparable to those of the separate analyses. Importantly, data from one centre could be used to train a support vector machine to accurately differentiate AD and normal ageing scans obtained from another centre with different subjects and different scanner equipment. Patients with mild, clinically probable AD and age/sex-matched controls were correctly separated in 89% of cases, which is compatible with published diagnosis rates in the best clinical centres. This method correctly assigned 89% of patients with a post-mortem-confirmed diagnosis of either AD or frontotemporal lobar degeneration to their respective group.
Our study leads to three conclusions: firstly, support vector machines successfully separate patients with AD from healthy ageing subjects; secondly, they perform well in the differential diagnosis of two different forms of dementia; thirdly, the method is robust and can be generalized across different centres. This suggests an important role for computer-based diagnostic image analysis in clinical practice.

Relevance: 20.00%

Abstract:

New results in the theory of constrained systems are applied to characterize the generators of Noether symmetry transformations. As a byproduct, an algorithm to construct gauge transformations in the Hamiltonian formalism is derived. This is illustrated with two relevant examples.

Relevance: 20.00%

Abstract:

We generalize the analogue of the Lee Hwa Chung theorem to the case of presymplectic manifolds. As an application, we study the canonical transformations of a canonical system (M, S, O). The role of Dirac brackets as a test of canonicity is clarified.

Relevance: 20.00%

Abstract:

The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool for the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
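A GRNN is, at its core, a Gaussian-kernel-weighted average of the training targets (Nadaraya-Watson regression), and the "automatic tuning" the abstract mentions amounts to choosing the kernel width by cross-validation. A minimal isotropic sketch, with illustrative data and candidate widths that are not from the paper:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    # GRNN prediction: Gaussian-kernel-weighted average of training
    # targets (Nadaraya-Watson regression with an isotropic kernel).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def tune_sigma_loo(X, y, sigmas):
    # Pick the kernel width by leave-one-out cross-validation,
    # the simplest form of the tuning described in the abstract.
    best, best_err = None, np.inf
    for s in sigmas:
        errs = []
        for i in range(len(y)):
            mask = np.arange(len(y)) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], s)
            errs.append((pred[0] - y[i]) ** 2)
        err = float(np.mean(errs))
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (80, 2))                           # sampling locations
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=80)  # noisy spatial field
sigma = tune_sigma_loo(X, y, [0.02, 0.05, 0.1, 0.2, 0.5])
```

The anisotropic variant in the paper replaces the single sigma with one width per coordinate (a diagonal kernel matrix), tuned the same way; the PNN classifier applies the identical kernel machinery to class indicators instead of continuous targets.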

Relevance: 20.00%

Abstract:

A 6N-dimensional alternative formulation is proposed for constrained Hamiltonian systems. In this context the no-interaction theorem is derived from the world-line conditions. A model of two interacting particles is exhibited in which the physical coordinates are canonical.

Relevance: 20.00%

Abstract:

Laudisa (Found. Phys. 38:1110-1132, 2008) claims that experimental research on the class of non-local hidden-variable theories introduced by Leggett is misguided, because these theories are irrelevant for the foundations of quantum mechanics. I show that Laudisa's arguments fail to establish the pessimistic conclusion he draws from them. In particular, it is not the case that Leggett-inspired research is based on a mistaken understanding of Bell's theorem, nor that previous no-hidden-variable theorems already exclude Leggett's models. Finally, I argue that the framework of Bohmian mechanics brings out the importance of Leggett tests, rather than proving their irrelevance, as Laudisa supposes.

Relevance: 20.00%

Abstract:

We derive a simple closed analytical expression for the total entropy production along a single stochastic trajectory of a Brownian particle diffusing on a periodic potential under an external constant force. By numerical simulations we compute the probability distribution functions of the entropy and satisfactorily test many of the predictions based on Seifert's integral fluctuation theorem. The results presented for this simple model clearly illustrate the practical features and implications derived from such a result of nonequilibrium statistical mechanics.
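The setting can be caricatured numerically. In the sketch below the periodic potential is dropped (flat landscape), so the periodic steady state is uniform and the total entropy production along a trajectory reduces to the medium part f*dx/(kB*T); the integral fluctuation theorem <exp(-dS)> = 1 can then be checked directly. All parameter values are illustrative, and this is not the authors' model or code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdamped Langevin particle on a ring driven by a constant force f,
# in units with gamma = k_B T = 1. With a flat potential the periodic
# steady state is uniform, so the total entropy production along a
# trajectory reduces to the medium part: delta_S = f * delta_x.
f, dt, n_steps, n_traj = 0.5, 1e-3, 1000, 5000

x = np.zeros(n_traj)                    # net displacement per trajectory
for _ in range(n_steps):
    # Euler-Maruyama step: drift f*dt plus thermal noise sqrt(2*dt)*xi
    x += f * dt + np.sqrt(2.0 * dt) * rng.normal(size=n_traj)

delta_s = f * x                         # entropy production per trajectory
ift = np.mean(np.exp(-delta_s))         # integral fluctuation theorem: ~1
mean_s = delta_s.mean()                 # second law: positive on average
```

Restoring a periodic potential makes the system-entropy term nontrivial (it requires the steady-state density), which is what the paper computes analytically; the fluctuation-theorem check itself works the same way.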

Relevance: 20.00%

Abstract:

An offender reentry grant program funded through the Governor's Office of Drug Control Policy supports one reentry coordinator at each of the following institutions: the Mount Pleasant Correctional Facility (MPCF), the Fort Dodge Correctional Facility, and the Clarinda Correctional Facility. The reentry coordinators there engage in a myriad of activities, working with institution educators, counselors and medical personnel, probation/parole officers and counselors, and, most importantly, the offenders themselves. The program has not been in operation for very long, and only MPCF has operated long enough to be looking at outcomes. The early returns for MPCF show good promise.