980 results for Computer Oriented Statistics
Abstract:
A Traffic Management System (TMS) comprises four major subsystems: the Network Database Management System, which provides information to passengers; the Transit Facility Management System, for service planning and the scheduling of vehicles and crews; the Congestion Management System, for traffic forecasting and planning; and the Safety Management System, concerned with the safety of passengers and the environment. This work opens up a rather wide framework of model structures for application to traffic. The facets of these theories are so wide that it seems impossible to present all necessary models in this work. However, it can be deduced from the study that the best Traffic Management System is one that is realistic in all aspects, easy to understand, and easy to apply. As it is practically difficult to devise an ideal fool-proof model, the attempt here has been to make some progress in that direction.
Abstract:
Mode of access: Internet.
Abstract:
Information can be expressed in many ways, according to the different capacities of humans to perceive it. Current systems deal with multimedia, multi-format, and multi-platform content, but another "multi" is still pending to guarantee global access to information: multilinguality. Different languages imply different replications of a system for each language in question, and no solution appears to bridge human representation (natural language) and system-oriented representation. In 1997, the United Nations University defined a language to support effective multilingualism on the Internet. In this paper, we describe this language and its possible applications beyond multilingual services, as a possible future standard for language-independent applications.
Abstract:
Our approach to knowledge presentation is based on the idea of an expert system shell. First, we build a graph shell of both possible dependencies and possible actions. Then, reasoning by means of log-linear models, we activate some nodes and some directed links. In this way a Bayesian network, and networks representing log-linear models, are generated.
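As an illustration only: the abstract does not give the shell's data structures or the activation criterion, so the following Python sketch is a hypothetical rendering of the idea, with a plain dependence-score threshold standing in for the log-linear reasoning step.

    # Sketch of the graph-shell idea: a shell of all *possible*
    # dependencies, from which reasoning activates a subgraph that can
    # serve as a Bayesian network. The activation criterion here
    # (a score threshold) is an illustrative assumption.
    class GraphShell:
        def __init__(self):
            self.links = {}  # (parent, child) -> dependence score

        def add_possible_link(self, parent, child, score):
            self.links[(parent, child)] = score

        def activate(self, threshold):
            # Keep only links whose score exceeds the threshold.
            return {edge for edge, s in self.links.items() if s > threshold}

    shell = GraphShell()
    shell.add_possible_link("Symptom", "Action", 0.9)
    shell.add_possible_link("Noise", "Action", 0.1)
    print(shell.activate(0.5))  # {('Symptom', 'Action')}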
Abstract:
Errata sheet inserted.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
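The abstract does not state the mirrored normal-Bingham density itself, but the two named special cases it generalizes have standard forms; for reference, in LaTeX:

    % Bingham density on the unit hypersphere S^{d-1}: antipodally
    % symmetric, with concentration matrix A and normalizing constant
    % F(A), a confluent hypergeometric function of matrix argument.
    \[
      p(\mathbf{x} \mid A) = \frac{1}{F(A)}
        \exp\!\big(\mathbf{x}^{\mathsf{T}} A\, \mathbf{x}\big),
      \qquad \mathbf{x} \in S^{d-1},
    \]
    % and the normal density in Euclidean space,
    \[
      p(\mathbf{t} \mid \boldsymbol{\mu}, \Sigma)
        = \mathcal{N}(\mathbf{t};\, \boldsymbol{\mu}, \Sigma).
    \]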
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
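As context for this formulation, bundle adjustment with priors over the 3D points can be written in the generic textbook form below; the notation is ours, not the thesis's.

    % C_i: camera parameters, X_j: 3D points, x_ij: observed image
    % locations, \pi: projection function, p(X_j): prior enforcing
    % object structure and its relationship with the scene geometry.
    \[
      \min_{\{C_i\},\,\{X_j\}} \;
        \sum_{i,j} \big\| \pi(C_i, X_j) - \mathbf{x}_{ij} \big\|^2
        \;-\; \sum_{j} \log p(X_j)
    \]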
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
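For orientation, the traditional image-space inverse perspective mapping that this chapter moves into feature space is a single homography warp; a minimal OpenCV sketch, with placeholder point correspondences standing in for ones derived from the camera pose:

    import cv2
    import numpy as np

    # Classical inverse perspective mapping (IPM): warp the image with
    # the homography that sends four ground-plane points to a
    # bird's-eye-view rectangle. The correspondences are placeholders.
    def inverse_perspective_map(image, src_pts, dst_pts, out_size):
        H = cv2.getPerspectiveTransform(np.float32(src_pts),
                                        np.float32(dst_pts))
        return cv2.warpPerspective(image, H, out_size)

The thesis's methods apply the analogous warp to HOG-like features computed from the image rather than to the image itself, which is what yields the reported speedups over image-warping approaches.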
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interaction and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions but also from the size of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 provides a new robustness criterion for parameter estimation, and several inference methods are shown to satisfy it. Chapter 5 develops a new prior that satisfies some further criteria and is therefore proposed for use in practice.
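The abstract does not describe the exact algorithm, so purely as a flavor of linear-time interpolation: when both the data sites and the query points are sorted, a single two-pointer pass interpolates in O(n + m), as in this hypothetical Python sketch.

    def linear_interp_sorted(xs, ys, queries):
        # O(n + m) piecewise-linear interpolation: xs and queries are
        # both assumed sorted in increasing order, so the site index i
        # only ever moves forward.
        out, i = [], 0
        for q in queries:
            while i + 1 < len(xs) and xs[i + 1] < q:
                i += 1
            if i + 1 == len(xs):
                out.append(ys[-1])  # query past the last site
            else:
                t = (q - xs[i]) / (xs[i + 1] - xs[i])
                out.append((1 - t) * ys[i] + t * ys[i + 1])
        return out

    print(linear_interp_sorted([0, 1, 2], [0.0, 10.0, 20.0], [0.5, 1.5]))
    # [5.0, 15.0]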
Abstract:
Research project whose main objective is to produce an electronic guide (web page) to the Casa Museu Lluís Domènech i Montaner. The project combines museum learning and computing: museum learning, through everything involved in reading the rooms and the materials exhibited in them; and computing, because it involves designing and building an electronic resource that allows a pleasant, easy-to-navigate, harmonious, and at the same time concrete presentation.
Abstract:
OBJECTIVE: To reveal the EEG correlates of resting hypofrontality in schizophrenia (SZ). METHOD: We analyzed the whole-head EEG topography in 14 patients compared to 14 matched controls by applying a new parameterization of the multichannel EEG. We used a combination of power measures tuned for regional surface mapping with power measures that allow evaluation of global effects. RESULTS: The SZ-related EEG abnormalities include i) a global decrease in absolute EEG power robustly manifested in the alpha and beta frequency bands, and ii) a relative increase in the alpha power over the prefrontal brain regions against its reduction over the posterior regions. In the alpha band both effects are linked to the SZ symptoms measured with Positive and Negative Symptom Scales and to chronicity. CONCLUSION: As alpha activity is related to regional deactivation, our findings support the concept of hypofrontality in SZ and expose the alpha rhythm as a sensitive indicator of it.
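The paper's parameterization of the multichannel EEG is its own; a generic computation of absolute and relative alpha-band power for a single channel can be sketched with SciPy (sampling rate, window length, and band edges below are assumptions):

    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def alpha_power(eeg, fs, band=(8.0, 12.0)):
        # Welch periodogram, then integrate over the alpha band for
        # absolute power, and normalize by total power for relative power.
        f, pxx = welch(eeg, fs=fs, nperseg=2 * int(fs))
        mask = (f >= band[0]) & (f <= band[1])
        absolute = trapezoid(pxx[mask], f[mask])
        return absolute, absolute / trapezoid(pxx, f)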
Abstract:
Cluster analysis is a multivariate technique that seeks to group elements or variables so as to achieve maximum homogeneity within each group and the greatest difference between groups, using a hierarchical structure to decide which hierarchical level is the most appropriate for establishing the classification. SPSS provides three types of cluster analysis: hierarchical, two-step, and K-means. We apply the hierarchical method, as the most suitable one for determining the optimal number of clusters in the data and their contents, to our case study.
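The hierarchical analysis described above can be sketched outside SPSS as well; a minimal Python/SciPy version, where the Ward linkage and the synthetic data are illustrative choices:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 2)),   # two synthetic groups
                   rng.normal(5, 1, (20, 2))])

    Z = linkage(X, method="ward")                    # build the hierarchy
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 clusters
    print(labels)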
Abstract:
The recent rapid development of biotechnological approaches has enabled the production of large whole-genome-level biological data sets. In order to handle these data sets, reliable and efficient automated tools and methods for data processing and result interpretation are required. Bioinformatics, as the field of studying and processing biological data, tries to answer this need by combining methods and approaches across computer science, statistics, mathematics, and engineering. The need is also increasing for tools that can be used by biological researchers themselves, who may not have a strong statistical or computational background; this requires creating tools and pipelines with intuitive user interfaces, robust analysis workflows, and a strong emphasis on result reporting and visualization.
Within this thesis, several data analysis tools and methods have been developed for analyzing high-throughput biological data sets. These approaches, covering several aspects of high-throughput data analysis, are specifically aimed at gene expression and genotyping data, although in principle they are suitable for analyzing other data types as well. Coherent handling of the data across the various analysis steps is highly important in order to ensure robust and reliable results. Thus, robust data analysis workflows are also described, putting the developed tools and methods into a wider context. The choice of the correct analysis method may also depend on the properties of the specific data set, and therefore guidelines for choosing an optimal method are given.
The data analysis tools, methods, and workflows developed within this thesis have been applied to several research studies, of which two representative examples are included in the thesis. The first study focuses on spermatogenesis in murine testis and the second examines cell lineage specification in mouse embryonic stem cells.
Abstract:
This paper describes a particular knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. The tool has been developed following an innovative approach directed at end users who are not familiar with computer-oriented terminology. Under this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), in such a way that the whole knowledge model is viewed by the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems that we found during the development and maintenance of this type of model. It then describes the KATS tool, a software application that we designed so that users who are not experts in computer programming can carry out this task. Finally, the paper compares KATS with other approaches to knowledge acquisition.
Abstract:
"Invited paper for the NATO Advanced Study Institute Seminar on Computer Oriented Learning Processes, Aug. 26-Sept. 7, 1974, Bonas, France."
Abstract:
For first-order classical logic, a new notion of admissible substitution is defined. This notion makes it possible to optimize the application of quantifier rules during logical inference search in sequent calculi. Our objective is to show that a computer-oriented sequent technique can be constructed that does not require preliminary skolemization of the initial formulas and that is comparable in efficiency with methods that exploit skolemization. Some results on its soundness and completeness are given.
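For context, the quantifier rules whose application such admissible substitutions optimize are the standard ones of the classical sequent calculus; in textbook notation (not the paper's):

    % y is an eigenvariable: it must not occur free in the conclusion.
    % t is an arbitrary term. Skolemization removes the eigenvariable
    % condition; the admissible-substitution approach aims to avoid it
    % without skolemizing the initial formulas.
    \[
      \frac{\Gamma \vdash \Delta,\; A[y/x]}
           {\Gamma \vdash \Delta,\; \forall x\, A}\;(\forall R)
      \qquad
      \frac{\Gamma,\; A[t/x] \vdash \Delta}
           {\Gamma,\; \forall x\, A \vdash \Delta}\;(\forall L)
    \]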