909 results for Automatic Inference
Abstract:
Simple formalized rules are proposed for the automatic phonetic transcription of Tamil words into Roman script. These rules are syntax-directed and require only a one-symbol look-ahead facility, and hence are easily automated on a digital computer. Some suggestions are also put forth for the linearization of Tamil script so that it can be handled by modern machinery.
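The look-ahead idea can be sketched generically. The rule table and symbol classes below are hypothetical placeholders, not the paper's actual Tamil-to-Roman rules; the sketch only illustrates how a syntax-directed transducer with a one-symbol look-ahead can be implemented.

```python
# Hypothetical rule table: (current symbol, class of next symbol) -> Roman output.
RULES = {
    ("K", "VOWEL"): "k",    # consonant followed by an explicit vowel sign
    ("K", "OTHER"): "ka",   # consonant carrying the implicit vowel
    ("I", "OTHER"): "i",    # vowel sign
}

def classify(symbol):
    """Placeholder symbol classifier used by the look-ahead."""
    return "VOWEL" if symbol == "I" else "OTHER"

def transliterate(symbols, rules=RULES):
    out = []
    for i, sym in enumerate(symbols):
        look_ahead = symbols[i + 1] if i + 1 < len(symbols) else None
        out.append(rules.get((sym, classify(look_ahead)), sym))
    return "".join(out)

print(transliterate(["K", "I"]))   # -> "ki"
print(transliterate(["K"]))        # -> "ka"
```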
Abstract:
In this paper a nonlinear controller has been designed using the dynamic inversion approach for automatic landing of unmanned aerial vehicles (UAVs), along with associated path planning. This is a difficult problem because of the light weight of UAVs and the strong coupling between longitudinal and lateral modes. The landing maneuver of the UAV is divided into approach, glideslope and flare. In the approach segment the UAV aligns with the centerline of the runway by heading angle correction. In the glideslope and flare segments the UAV follows a straight line and an exponential curve, respectively, in the pitch plane with no lateral deviations. The glideslope and flare paths are scheduled as a function of approach distance from the runway. The trajectory parameters are calculated such that the sink rate at touchdown remains within specified bounds. It is also ensured that the transition from the glideslope to the flare path is smooth by enforcing C1 continuity at the transition. In the outer loop, the roll rate command is generated by assuring a coordinated turn in the alignment segment and zero bank angle in the glideslope and flare segments. The pitch rate command is generated from the error in altitude to control deviations from the landing trajectory. The yaw rate command is generated from the required heading correction. In the inner loop, the aileron, elevator and rudder deflections are computed together to track the required body rate commands. Moreover, it is also ensured that the forward velocity of the UAV at touchdown remains close to a desired value by manipulating the thrust of the vehicle. A nonlinear six-DOF model, which has been developed from extensive wind-tunnel testing, is used both for control design and for its validation.
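As a rough illustration of the C1-continuous glideslope-to-flare transition, the sketch below picks the flare-entry altitude so that the slope of the exponential flare matches the glideslope gradient at the switch point. The parameter values (glideslope angle, flare distance constant) are assumptions, not the paper's scheduled values.

```python
import numpy as np

def landing_altitude(x, gamma_deg=3.0, tau=150.0):
    """Commanded altitude vs. distance-to-go x (metres); parameters are hypothetical."""
    slope = np.tan(np.radians(gamma_deg))    # glideslope gradient tan(gamma)
    h_f = tau * slope                        # flare-entry altitude chosen so the flare
                                             # slope h_f/tau equals tan(gamma) -> C1 continuity
    x_f = h_f / slope                        # = tau: distance-to-go at flare entry
    x = np.asarray(x, dtype=float)
    glide = h_f + slope * (x - x_f)          # straight glideslope segment (x >= x_f)
    flare = h_f * np.exp((x - x_f) / tau)    # exponential flare segment (x < x_f)
    return np.where(x >= x_f, glide, flare)

# Example: commanded altitude at several distances-to-go.
print(landing_altitude([1000.0, 300.0, 150.0, 50.0, 0.0]))
```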
Abstract:
In many parts of the world, uncontrolled fires in sparsely populated areas are a major concern as they can quickly grow into large and destructive conflagrations in short time spans. Detecting these fires has traditionally been a job for trained humans on the ground or in the air. In many cases, these manned solutions are simply not able to survey the amount of area necessary to maintain sufficient vigilance and coverage. This paper investigates the use of unmanned aerial systems (UAS) for automated wildfire detection. The proposed system uses low-cost, consumer-grade electronics and sensors combined with various airframes to create a system suitable for the automatic detection of wildfires. The system employs automatic image processing techniques to analyze captured images and autonomously detect fire-related features such as fire lines, burnt regions and flammable material. This image recognition algorithm is designed to cope with environmental occlusions such as shadows, smoke and obstructions. Once the fire is identified and classified, it is used to initialize a spatial/temporal fire simulation. This simulation is based on occupancy maps whose fidelity can be varied to include stochastic elements, various types of vegetation, weather conditions and unique terrain. The simulations can be used to predict the effects of optimized firefighting methods, to prevent the future propagation of the fires, and to reduce the time to detection of wildfires, thereby greatly minimizing the ensuing damage. This paper also documents experimental flight tests using a SenseFly Swinglet UAS conducted in Brisbane, Australia, as well as modifications for custom UAS.
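The occupancy-map idea can be illustrated with a toy stochastic spread step; the cell states, spread probability and four-neighbour rule below are assumptions made for illustration only, not the paper's simulator.

```python
import numpy as np

# Toy stochastic occupancy-map spread step.
# Cell states: 0 = unburnable, 1 = fuel, 2 = burning, 3 = burnt.
def spread_step(grid, p_spread=0.3, rng=None):
    rng = rng or np.random.default_rng()
    new = grid.copy()
    for r, c in np.argwhere(grid == 2):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                    and grid[rr, cc] == 1 and rng.random() < p_spread):
                new[rr, cc] = 2           # neighbouring fuel ignites stochastically
        new[r, c] = 3                     # a burning cell burns out after one step
    return new

grid = np.ones((5, 5), dtype=int)
grid[2, 2] = 2                            # ignition point
print(spread_step(grid))
```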
Abstract:
Electricity generation is vital in developed countries to power the many mechanical and electrical devices that people require. Unfortunately, electricity generation is costly, and although electricity can be generated on demand, it cannot be stored efficiently. Electricity generation is also difficult to manage because exact demand is unknown from one instant to the next. A number of services are required to manage fluctuations in electricity demand and to protect the system when frequency falls too low. A current approach is called automatic under-frequency load shedding (AUFLS). This article proposes new methods for optimising AUFLS in New Zealand's power system. The core ideas were developed during the 2015 Maths and Industry Study Group (MISG) in Brisbane, Australia. The problem has been motivated by Transpower Limited, a company that manages New Zealand's power system and transports bulk electricity from where it is generated to where it is needed. The approaches developed in this article can be used in electrical power systems anywhere in the world.
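As a purely illustrative sketch of what an AUFLS scheme does, the snippet below sheds discrete load blocks as frequency falls; the trigger frequencies and block sizes are placeholder assumptions, not Transpower's actual settings or the optimised schemes proposed in the article.

```python
# Hypothetical AUFLS block table: (trigger frequency in Hz, fraction of load shed).
BLOCKS = [(49.2, 0.16), (48.8, 0.16)]

def shed_fraction(frequency_hz, blocks=BLOCKS):
    """Total fraction of system load shed once frequency has fallen to frequency_hz."""
    return sum(frac for trigger, frac in blocks if frequency_hz <= trigger)

print(shed_fraction(49.5))   # 0.0  -> no blocks tripped
print(shed_fraction(48.7))   # 0.32 -> both blocks tripped
```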
Abstract:
In this paper an approach for automatic road extraction in an urban region using structural, spectral and geometric characteristics of roads is presented. Roads are extracted in two stages: pre-processing and road extraction. Initially, the image is pre-processed to improve the tolerance of the method by reducing clutter (which mostly represents buildings, parking lots, vegetation regions and other open spaces). The road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method is a graph-based method that generates an optimal partition of road segments. The performance evaluation (quality measures) for road extraction using TPA and the Normalized cut method is compared. The experimental results show that the Normalized cut method is efficient in extracting road segments in urban regions from high-resolution satellite images.
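For readers unfamiliar with the Normalized cut idea, a minimal Shi-Malik style bipartition over a precomputed affinity matrix is sketched below; it is not the paper's TPA/Normalized-cut pipeline, and the tiny affinity matrix is invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut(W):
    """Relaxed normalized-cut bipartition of an affinity matrix W."""
    d = W.sum(axis=1)
    D = np.diag(d)
    # Solve (D - W) y = lambda * D * y; the eigenvector of the second-smallest
    # eigenvalue is the relaxed bipartition indicator.
    _, vecs = eigh(D - W, D)
    y = vecs[:, 1]
    return y > np.median(y)                # threshold into two segments

# Tiny example: two 2-node clusters weakly connected to each other.
W = np.array([[0.0, 1.0, 0.05, 0.05],
              [1.0, 0.0, 0.05, 0.05],
              [0.05, 0.05, 0.0, 1.0],
              [0.05, 0.05, 1.0, 0.0]])
print(normalized_cut(W))                   # e.g. [False False  True  True]
```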
Abstract:
The business value of information technology (IT) is realized through the continuous use of IT subsequent to users' adoption. Understanding post-adoptive IT usage is useful in realizing potential IT business value. Most previous research on post-adoptive IT usage, however, dismisses the unintentional and unconscious aspects of usage behavior. This paper advances understanding of the unintentional, unconscious, and thereby automatic usage of IT features during the post-adoptive stage. Drawing from the social psychology literature, we argue that human behaviors can be triggered by environmental cues and directed by a person's mental goals, thereby operating without the person's conscious awareness and intentional will. On this basis, we theorize the role of a user's innovativeness goal, as the desired state of an act to innovate, in directing the user's unintentional, unconscious, and automatic post-adoptive IT feature usage behavior. To test the hypothesized mechanisms, a human experiment employing a priming technique is described.
Abstract:
In this thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small-sample correction for the likelihood ratio (LR) test of cointegrating rank and the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite-sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970–2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)–2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
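As a small, self-contained illustration of the kind of cointegrating-rank testing discussed in the thesis, the snippet below runs Johansen's trace test on simulated series sharing one stochastic trend; it uses statsmodels' coint_johansen and has no connection to the thesis's Swedish money demand or stock index data.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
n = 200
common = np.cumsum(rng.normal(size=n))               # shared stochastic trend (I(1))
y = np.column_stack([common + rng.normal(size=n),    # two I(1) series sharing the trend,
                     common + rng.normal(size=n)])   # hence one cointegrating relation

res = coint_johansen(y, det_order=0, k_ar_diff=1)
print(res.lr1)   # trace statistics for rank 0 and rank <= 1
print(res.cvt)   # corresponding 90/95/99% critical values
```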
Abstract:
Separation of printed text blocks from non-text areas containing signatures, handwritten text, logos and other such symbols is a necessary first step for an OCR system involving printed text recognition. In the present work, we compare the efficacy of several feature-classifier combinations for carrying out this separation task. We have selected the length-normalized horizontal projection profile (HPP) as the starting point of the separation, on the assumption that printed text blocks contain lines of text which generate HPPs with some regularity. This assumption is demonstrated to be valid. Our features are the HPP and its two transformed versions, namely the eigen and Fisher profiles. Four well-known classifiers, namely nearest neighbor, linear discriminant function, SVMs and artificial neural networks, have been considered, and the efficiency of the combination of these classifiers with the above features is compared. A sequential floating feature selection technique has been adopted to enhance the efficiency of the separation task. The results give an average accuracy of about 96%.
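A minimal sketch of the HPP feature is given below; the fixed profile length and the amplitude normalization are assumptions standing in for the paper's exact length-normalization.

```python
import numpy as np

def horizontal_projection_profile(block, n_bins=64):
    """Length-normalized HPP of a binarized block image (1 = ink, 0 = background)."""
    rows = block.sum(axis=1).astype(float)      # ink count per scan line
    # Resample to a fixed length so blocks of different heights are comparable.
    x_old = np.linspace(0.0, 1.0, len(rows))
    x_new = np.linspace(0.0, 1.0, n_bins)
    profile = np.interp(x_new, x_old, rows)
    return profile / (profile.max() + 1e-9)     # normalize amplitude
```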
Abstract:
Modern sample surveys started to spread after statisticians at the U.S. Bureau of the Census in the 1940s had developed a sampling design for the Current Population Survey (CPS). A significant factor was also that digital computers became available to statisticians. In the early 1950s, the theory was documented in textbooks on survey sampling. This thesis is about the development of statistical inference for sample surveys. The idea of statistical inference was first enunciated by a French scientist, P. S. Laplace. In 1781, he published a plan for a partial investigation in which he determined the sample size needed to reach the desired accuracy in estimation. The plan was based on Laplace's Principle of Inverse Probability and on his derivation of the Central Limit Theorem. These were published in a memoir in 1774, which is one of the origins of statistical inference. Laplace's inference model was based on Bernoulli trials and binomial probabilities. He assumed that populations were changing constantly, which he depicted by assuming a priori distributions for the parameters. Laplace's inference model dominated statistical thinking for a century. Sample selection in Laplace's investigations was purposive. In 1894, at the International Statistical Institute meeting, the Norwegian Anders Kiaer presented the idea of the Representative Method for drawing samples. Its idea was that the sample should be a miniature of the population, a notion that still prevails. The virtues of random sampling were known, but practical problems of sample selection and data collection hindered its use. Arthur Bowley realized the potential of Kiaer's method and, in the beginning of the 20th century, carried out several surveys in the UK. He also developed the theory of statistical inference for finite populations; it was based on Laplace's inference model. R. A. Fisher's contributions in the 1920s constitute a watershed in statistical science: he revolutionized the theory of statistics. In addition, he introduced a new statistical inference model which is still the prevailing paradigm. Its essential ideas are to draw repeated samples from the same population and to assume that population parameters are constants. Fisher's theory did not include a priori probabilities. Jerzy Neyman adopted Fisher's inference model and applied it to finite populations, with the difference that Neyman's inference model does not include any assumptions about the distributions of the study variables. Applying Fisher's fiducial argument, he developed the theory of confidence intervals. Neyman's last contribution to survey sampling presented a theory for double sampling. This gave the statisticians at the U.S. Census Bureau the central idea for developing the complex survey design for the CPS. An important criterion was to have a method in which the costs of data collection were acceptable and which provided approximately equal interviewer workloads, besides sufficient accuracy in estimation.
Abstract:
This paper describes a novel mimetic technique using a frequency-domain approach and digital filters for the automatic generation of EEG reports. Digitized EEG data files, transported on a cartridge, have been used for the analysis. The signals are filtered into alpha, beta, theta and delta bands with digital bandpass filters of fourth-order, cascaded, Butterworth, infinite impulse response (IIR) type. The maximum amplitude, mean frequency, continuity index and degree of asymmetry have been computed for each EEG frequency band. Finally, searches for the presence of artifacts (eye movement or muscle artifacts) in the EEG records have been made.
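A sketch of the band-splitting step, assuming fourth-order Butterworth band-pass filters implemented as cascaded second-order sections in SciPy; the band edges and sampling rate below are conventional values, not necessarily those used in the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conventional EEG band edges in Hz (assumed, not the paper's exact values).
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def split_bands(eeg, fs=256.0):
    """Return a dict of band-filtered copies of a single EEG channel."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, eeg)     # zero-phase filtering of the channel
    return out

# Example on ten seconds of synthetic data.
eeg = np.random.default_rng(0).normal(size=int(10 * 256))
bands = split_bands(eeg)
print({k: v.shape for k, v in bands.items()})
```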
Abstract:
Several researchers have looked into various issues related to the automatic parallelization of sequential programs for multicomputers, but there is a need for a coherent framework which encompasses all these issues. In this paper we present such a framework which takes best advantage of the multicomputer architecture. We resort to the tiling transformation for iteration space partitioning and propose a scheme of automatic data partitioning and dynamic data distribution. We have tried a simple implementation of our scheme on a transputer-based multicomputer [1] and the results are encouraging.
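The tiling transformation can be sketched for a 2-D iteration space as below; the tile size and loop bounds are arbitrary, and the sketch only shows how the iteration space is partitioned into tiles, not the paper's data partitioning or distribution scheme.

```python
def tiled_iteration(n, m, tile=32):
    """Yield (i, j) index pairs tile by tile instead of row by row."""
    for ii in range(0, n, tile):
        for jj in range(0, m, tile):
            for i in range(ii, min(ii + tile, n)):
                for j in range(jj, min(jj + tile, m)):
                    yield i, j

# Each tile (a contiguous block of the iteration space) could be assigned to a
# different processor of the multicomputer.
print(sum(1 for _ in tiled_iteration(100, 100)))   # 10000: same iterations, new order
```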
Abstract:
Formal specification is vital to the development of distributed real-time systems, as these systems are inherently complex and safety-critical. It is widely acknowledged that formal specification and automatic analysis of specifications can significantly increase system reliability. Although a number of specification techniques for real-time systems have been reported in the literature, most of these formalisms do not adequately address the constraints that the aspects of 'distribution' and 'real-time' impose on specifications. Further, an automatic verification tool is necessary to reduce human errors in the reasoning process. In this regard, this paper is an attempt towards the development of a novel executable specification language, DL, for distributed real-time systems. First, we give a precise characterization of the syntax and semantics of DL. Subsequently, we discuss the problems of model checking, automatic verification of the satisfiability of DL specifications, and testing conformance of event traces with DL specifications. Effective solutions to these problems are presented as extensions to the classical first-order tableau algorithm. The use of the proposed framework is illustrated by specifying a sample problem.
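To give a flavour of tableau-based satisfiability checking, a minimal classical propositional tableau is sketched below; the formula encoding is hypothetical, and the first-order and DL-specific extensions described in the paper are omitted.

```python
# Formula encoding (hypothetical): ("atom", p), ("not", f), ("and", f, g), ("or", f, g)

def open_branch(todo, literals=frozenset()):
    """Return True if the tableau branch for the formulas in `todo` can stay open."""
    todo = list(todo)
    while todo:
        f = todo.pop()
        op = f[0]
        if op == "atom":
            if ("not", f) in literals:
                return False                        # contradiction closes the branch
            literals = literals | {f}
        elif op == "not" and f[1][0] == "atom":
            if f[1] in literals:
                return False
            literals = literals | {f}
        elif op == "not" and f[1][0] == "not":      # double negation
            todo.append(f[1][1])
        elif op == "and":                           # alpha rule: add both conjuncts
            todo.extend([f[1], f[2]])
        elif op == "not" and f[1][0] == "or":       # alpha: not(A or B) -> not A, not B
            todo.extend([("not", f[1][1]), ("not", f[1][2])])
        elif op == "or":                            # beta rule: branch on the disjuncts
            return (open_branch(todo + [f[1]], literals)
                    or open_branch(todo + [f[2]], literals))
        elif op == "not" and f[1][0] == "and":      # beta: not(A and B)
            return (open_branch(todo + [("not", f[1][1])], literals)
                    or open_branch(todo + [("not", f[1][2])], literals))
    return True                                     # no contradiction: branch is open

p, q = ("atom", "p"), ("atom", "q")
print(open_branch([("and", ("or", p, q), ("not", p))]))   # True: satisfiable
print(open_branch([("and", p, ("not", p))]))              # False: unsatisfiable
```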
Abstract:
Our ability to infer protein quaternary structure automatically from atom and lattice information is inadequate, especially for weak complexes and heteromeric quaternary structures. Several approaches exist, but they have limited performance. Here, we present a new scheme to infer protein quaternary structure from lattice and protein information, with all-around coverage for strong, weak and very weak affinity homomeric and heteromeric complexes. The scheme combines a naive Bayes classifier and point group symmetry under a Boolean framework to detect quaternary structures in the crystal lattice. It consistently produces >= 90% coverage across diverse benchmarking data sets, including a notably superior 95% coverage for the recognition of heteromeric complexes, compared with 53% on the same data set by the current state-of-the-art method. A detailed study of a limited number of prediction-failed cases offers interesting insights into the intriguing nature of protein contacts in the lattice. The findings have implications for the accurate inference of quaternary states of proteins, especially weak affinity complexes.
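The classification half of the scheme can be caricatured with a Bernoulli naive Bayes over Boolean contact features; the feature names, training labels and data below are invented for illustration, and the point-group symmetry component is omitted.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical Boolean features of a lattice contact, e.g.
# [large interface, conserved patch, crystal-packing motif].
X_train = np.array([[1, 1, 0],
                    [1, 0, 0],
                    [0, 0, 1],
                    [0, 1, 1]])
y_train = np.array([1, 1, 0, 0])        # 1 = biological contact, 0 = lattice contact

clf = BernoulliNB().fit(X_train, y_train)
print(clf.predict_proba([[1, 1, 1]]))   # posterior over the two classes for a new contact
```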
Abstract:
A practical method is proposed to identify the mode associated with the frequency part of an eigenvalue of the Floquet transition matrix (FTM). From the FTM eigenvector, which contains the states and their derivatives, the ratio of the derivative to the state is computed at the largest component. The method exploits the fact that the imaginary part of this (complex) ratio closely approximates the frequency of the mode. It also lends itself well to automation and has been tested over a large number of FTMs of order as high as 250.
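A sketch of the computation described above, assuming the eigenvector is stacked as states followed by their derivatives (that ordering is an assumption of the sketch, not stated by the abstract):

```python
import numpy as np

def modal_frequencies(ftm):
    """Approximate modal frequencies from an FTM with eigenvectors stacked as [x; xdot]."""
    _, vecs = np.linalg.eig(ftm)
    n = ftm.shape[0] // 2
    freqs = []
    for k in range(ftm.shape[0]):
        states, derivs = vecs[:n, k], vecs[n:, k]
        j = np.argmax(np.abs(states))          # index of the largest state component
        ratio = derivs[j] / states[j]          # derivative/state ratio at that component
        freqs.append(abs(ratio.imag))          # its imaginary part approximates the frequency
    return np.array(freqs)
```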