982 results for mathematical functions


Relevance:

60.00%

Publisher:

Abstract:

Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex, continuous mathematical functions as an essential step in their operation. However, rigorous evaluation of such functions often carries a very high computational cost that is unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique suits any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
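
To make the idea concrete, below is a minimal sketch (with illustrative names and parameters) of evaluating a function through a uniform piecewise linear table; the paper's nearly optimal designs choose the subintervals more carefully, and on a GPU the interpolation step would map to the texture unit's fixed-function routines rather than `np.interp`.

```python
import numpy as np

def gaussian(x, sigma=1.0):
    return np.exp(-0.5 * (x / sigma) ** 2)

def piecewise_linear_eval(f, a, b, n, x):
    """Approximate f on [a, b] with n uniform subintervals, interpolating
    linearly between values tabulated at the breakpoints."""
    xs = np.linspace(a, b, n + 1)   # breakpoints
    ys = f(xs)                      # lookup table of exact values
    return np.interp(x, xs, ys)     # linear interpolation between table entries

# For smooth f the maximum error of interpolatory piecewise linear
# approximation shrinks roughly as O(1/n^2).
x = np.linspace(-4.0, 4.0, 100_000)
for n in (16, 64, 256):
    err = np.max(np.abs(gaussian(x) - piecewise_linear_eval(gaussian, -4.0, 4.0, n, x)))
    print(f"n = {n:4d}  max error = {err:.2e}")
```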

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents an approach to cutting dynamics during turning based upon the mechanism of deformation of work material around the tool nose known as "ploughing". Starting from the shearing process in the cutting zone and accounting for "ploughing", new mathematical models relating turning force components to cutting conditions, tool geometry and tool vibration are developed. These models are developed separately for steady state and for oscillatory turning with new and worn tools. Experimental results are used to determine mathematical functions expressing the parameters introduced by the steady state model in the case of a new tool. The forms of these functions are of general validity, though their coefficients depend on work and tool materials. Good agreement is achieved between experimental and predicted forces. On one hand, the model is extended to include different work materials by introducing a hardness factor, and it provides good predictions when compared with present and published experimental results. On the other hand, the extension of the ploughing model to turning with a worn edge demonstrates the model's ability to predict machining forces during steady state turning with the worn flank of the tool. In the development of the dynamic models, the dynamic turning force equations define the cutting process as a system for which vibration of the tool tip in the feed direction is the input and the measured forces are the output. The model takes into account the shear plane oscillation and the variation of the cutting configuration in response to tool motion. Theoretical expressions of the turning forces are obtained for new and worn cutting edges. The dynamic analysis reveals the interaction between the cutting mechanism and the machine tool structure. The effect of the machine tool and tool post is accounted for by using experimental data for the transfer function of the tool post system. Steady state coefficients are corrected to include the changes in the cutting configuration with tool vibration and are used in the dynamic model. A series of oscillatory cutting tests at various conditions and various tool flank wear levels is carried out, and experimental results are compared with model-predicted forces. Good agreement between predictions and experiments is achieved over a wide range of cutting conditions. This research bridges the gap between the analysis of vibration and of turning forces in turning. It offers an explicit expression for the dynamic turning force generated during machining and highlights the relationships between tool wear, tool vibration and turning force. Spectral analysis of tool acceleration and turning force components leads to the definition of an "Inertance Power Ratio" as a flank wear monitoring factor. A formulation of an on-line flank wear monitoring methodology is presented, showing how the results of the present model can be applied to practical in-process tool wear monitoring in turning operations.
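
The abstract does not define the "Inertance Power Ratio" explicitly, so the following is only one plausible reading: since inertance is acceleration per unit force in the frequency domain, a band-limited ratio of the acceleration and force power spectra is sketched. The function names, frequency band and averaging scheme are all assumptions.

```python
import numpy as np

def power_spectrum(x, fs, nseg=1024):
    """Averaged periodogram (Welch-style, rectangular window, no overlap)."""
    segs = [x[i:i + nseg] for i in range(0, len(x) - nseg + 1, nseg)]
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
    return freqs, psd / (fs * nseg)

def inertance_power_ratio(accel, force, fs, band=(50.0, 500.0)):
    """Hypothetical reading of the thesis's monitoring factor: band-limited
    ratio of tool-acceleration power to turning-force power."""
    f, p_a = power_spectrum(accel, fs)
    _, p_f = power_spectrum(force, fs)
    m = (f >= band[0]) & (f <= band[1])
    return np.trapz(p_a[m], f[m]) / np.trapz(p_f[m], f[m])
```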

Relevance:

60.00%

Publisher:

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate with and understand one another. In many domains (e.g. geospatial) the data being described contain some uncertainty, often due to incomplete knowledge; meaningful processing of these data requires the uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. We adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) introduce semantics to the soft-typed elements by linking them to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
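
To give a flavour of the soft-typed design, here is a sketch that builds an UncertML-like element for a marginal Gaussian using Python's standard library; the element names, attribute names and dictionary URI are invented for illustration and do not reproduce the actual UncertML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical dictionary base URI for this sketch; the real UncertML
# dictionaries define their own identifiers.
UNCERT_NS = "http://www.uncertml.org/"

def gaussian_element(mean, variance):
    """Encode a marginal Gaussian as a soft-typed element whose definition
    would live in a GML-style dictionary, referenced by URI."""
    dist = ET.Element("Distribution",
                      {"definition": UNCERT_NS + "distributions/gaussian"})
    ET.SubElement(dist, "parameter", {"name": "mean"}).text = str(mean)
    ET.SubElement(dist, "parameter", {"name": "variance"}).text = str(variance)
    return dist

print(ET.tostring(gaussian_element(12.3, 0.25), encoding="unicode"))
```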

Relevance:

60.00%

Publisher:

Abstract:

This dissertation introduces an integrated algorithm for a new application dedicated to discriminating between electrodes that lead to seizure onset and those that do not, using interictal subdural EEG data. The significance of this study is in determining, among all of these channels, all containing interictal spikes, why some electrodes eventually lead to seizure while others do not. A first finding in the development of the algorithm is that these interictal spikes had to be asynchronous and located in different regions of the brain before any consequential interpretation of EEG behavioral patterns was possible. A singular merit of the proposed approach is that even when the EEG data are randomly selected (independent of the onset of seizure), we are able to separate the channels that lead to seizure from those that do not. It is also revealed that the region of ictal activity does not necessarily evolve from the tissue located at the channels that present interictal activity, as commonly believed. The study is also significant in correlating clinical features of the EEG with the patient's source of ictal activity, which comes from a specific subset of the channels that present interictal activity. The contributions of this dissertation emanate from (a) the choice of the discriminating parameters used in the implementation, (b) the unique feature space used to optimize the delineation of these two types of electrodes, (c) the development of a back-propagation neural network that automated the decision-making process, and (d) the establishment of mathematical functions that elicited the reasons for this delineation.
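
As a sketch of step (c), here is a minimal back-propagation network that separates two classes of channels from per-channel features; the features, network size and data are placeholders, not the dissertation's actual feature space or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-channel features derived from interictal spikes
# (e.g. spike rate, amplitude, asynchrony index); labels mark channels
# that eventually lead to seizure onset.
X = rng.normal(size=(200, 3))                    # 200 channels, 3 features
y = (X @ np.array([1.5, -1.0, 0.8]) > 0) * 1.0   # 1 = leads to seizure onset

# One-hidden-layer network trained with plain back-propagation.
W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    p = sigmoid(h @ W2 + b2).ravel()
    g = (p - y)[:, None] / len(y)                # d(log-loss)/d(logit)
    gh = g @ W2.T * (1 - h ** 2)                 # back-propagate through tanh
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(0)
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(0)

print("training accuracy:", np.mean((p > 0.5) == y))
```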

Relevance:

60.00%

Publisher:

Abstract:

Foraminiferal data were obtained from 66 samples of box cores on the southeastern Brazilian upper margin (between 23.8°-25.9°S and 42.8°-46.13°W) to evaluate the benthic foraminiferal fauna distribution and its relation to selected abiotic parameters. We focused on areas with different primary production regimes on the southern Brazilian margin, which is generally considered an oligotrophic region. The total density (D), richness (R), mean diversity (H̄′), average living depth (ALD(X)) and percentages of specimens of different microhabitats (epifauna, shallow infauna, intermediate infauna and deep infauna) were analyzed. The dominant species identified were Uvigerina spp., Globocassidulina subglobosa, Bulimina marginata, Adercotryma wrighti, Islandiella norcrossi, Rhizammina spp. and Brizalina sp. We also established a set of mathematical functions for analyzing the vertical foraminiferal distribution patterns, providing a quantitative tool for correlating the microfaunal density distributions with abiotic factors. In general, the cores that fit pure exponentially decaying functions were related to the oligotrophic conditions prevalent on the Brazilian margin and to the flow of the Brazil Current (BC). Different foraminiferal responses were identified in cores located in higher-productivity zones, such as the northern and southern regions of the study area: high percentages of infauna were encountered in these cores, and the functions used to fit their profiles differ appreciably from a pure exponential function, reflecting the significant living fauna in deeper layers of the sediment. One of the main factors supporting the different foraminiferal assemblage responses may be the differences in primary productivity of the water column and, consequently, in the estimated carbon flux to the sea floor. Nevertheless, bottom water velocities, substrate type and water depth also need to be considered.
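
As an illustration of the profile-fitting step, here is a sketch that fits a pure exponentially decaying function to a density-versus-depth profile; the data values and parameter names are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pure exponential decay of living-specimen density with sediment depth,
# the profile shape the study associates with oligotrophic conditions.
def exp_decay(z, d0, k):
    return d0 * np.exp(-k * z)

depth = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 6.0, 8.0])          # cm, invented
density = np.array([120.0, 70.0, 42.0, 24.0, 15.0, 6.0, 2.0])  # invented counts

(d0, k), _ = curve_fit(exp_decay, depth, density, p0=(100.0, 0.5))
print(f"surface density ~ {d0:.1f}, decay rate ~ {k:.2f} /cm")

# A poor fit (large residuals in the deeper layers) would flag the
# infauna-rich profiles found in the higher-productivity zones.
residuals = density - exp_decay(depth, d0, k)
```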

Relevance:

60.00%

Publisher:

Abstract:

Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years and is still investigated today. The goal of Pitch Estimation is to find the pitch, or fundamental frequency, of a digital recording of speech or of musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation for real instruments is a very hard task to address. Each instrument has its own physical characteristics, which are reflected in different spectral characteristics. Furthermore, recording conditions can vary from studio to studio, and background noise must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as 1 + λ, and the value of λ can be chosen. The toolbox is very simple to use. Settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. It can handle different types of fitness functions, both minimization and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each of the classifiers: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). The F-measure was used as the fitness function. Signals with the same pitch as the classifier that were correctly identified count as true positives; signals with the same pitch that were not identified count as false negatives; signals with a different pitch that were not identified count as true negatives; and signals with a different pitch that were identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is basically composed of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and did not identify the true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach the obtained results were very promising, but could be improved. We made slight changes to our approach, and the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
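
The F-measure fitness can be written down directly from the abstract's definitions of true positives, false negatives and false positives; a minimal sketch (the counts are illustrative):

```python
def f_measure(tp, fp, fn):
    """F1 score used as the fitness: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Per the abstract: same-pitch signals detected -> TP, missed -> FN;
# different-pitch signals detected -> FP, rejected -> TN (TN plays no
# role in the F-measure itself).
print(f_measure(tp=28, fp=4, fn=2))
```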

Relevance:

40.00%

Publisher:

Abstract:

The aim of this dissertation is to introduce Bessel functions to the reader and to study some of their properties. Its final goal is to present the most well-known applications of Bessel functions in physics.
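
For reference (standard material, not specific to the dissertation), Bessel functions of order ν are the canonical solutions of Bessel's differential equation:

```latex
x^{2}\frac{d^{2}y}{dx^{2}} + x\frac{dy}{dx} + \left(x^{2} - \nu^{2}\right)y = 0
```

Its two standard solutions are the Bessel functions of the first kind, J_ν(x), which are finite at x = 0 for integer ν, and of the second kind, Y_ν(x).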

Relevance:

30.00%

Publisher:

Abstract:

We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics, and a polynomial-based whole-lens model. The models were used to describe the in vitro shape for a data set of twenty human lenses aged 7-82 years. The two-conic-surface model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for this data set, indicating that they can most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surfaces model is unable to describe the lens equatorial region). Considerable differences were found between some models regarding estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities across the five models considered. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, which were usually positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty in selecting an appropriate model for crystalline lens shape.
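
As a sketch of the surface-fitting step, the following fits the standard conic sag equation, with vertex radius of curvature R and asphericity Q, to an invented profile. Note it minimises vertical rather than orthogonal residuals, unlike the paper; the bootstrap would repeat this fit on (r, z) pairs resampled with replacement to obtain the uncertainty distributions of R and Q.

```python
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(r, R, Q):
    """Sag of a conic surface with vertex radius R and asphericity Q; the
    radicand is clipped so curve_fit's trial parameters cannot step outside
    the square root's domain."""
    rad = np.clip(1 - (1 + Q) * r**2 / R**2, 1e-9, None)
    return r**2 / (R * (1 + np.sqrt(rad)))

# Invented anterior-surface profile within a 7 mm zone (all lengths in mm).
rng = np.random.default_rng(1)
r = np.linspace(0.0, 3.5, 15)
z = conic_sag(r, R=10.5, Q=-2.5) + rng.normal(0.0, 0.005, r.size)

(R_hat, Q_hat), _ = curve_fit(conic_sag, r, z, p0=(10.0, -1.0))
print(f"vertex radius ~ {R_hat:.2f} mm, asphericity ~ {Q_hat:.2f}")
```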

Relevance:

30.00%

Publisher:

Abstract:

Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics, as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions are investigated for both of these problems. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a `square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and for approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
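
A minimal dense-matrix sketch of the plain Lanczos approximation to f(A)b with f(t) = t^(-1/2), the baseline among the methods compared above, applied to GMRF sampling via x = A^(-1/2)z. The matrix and dimensions are illustrative; a practical implementation would exploit sparsity, reorthogonalisation, and the thesis's restarting and stopping criteria.

```python
import numpy as np

def lanczos_fAb(A, b, m, f=lambda t: t ** -0.5):
    """m-step Lanczos approximation to f(A)b for symmetric positive definite A:
    project onto span{b, Ab, ..., A^(m-1)b}, apply f to the small tridiagonal
    matrix T, and lift back: f(A)b ~ ||b|| Q f(T) e1."""
    n = len(b)
    Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    lam, U = np.linalg.eigh(T)              # f(T) via eigendecomposition
    e1 = np.zeros(m); e1[0] = 1.0
    return np.linalg.norm(b) * (Q @ (U @ (f(lam) * (U.T @ e1))))

# Sampling x ~ N(0, A^{-1}) from a GMRF with precision A: x = A^{-1/2} z.
rng = np.random.default_rng(0)
A = np.diag(np.full(100, 2.5)) - np.diag(np.ones(99), 1) - np.diag(np.ones(99), -1)
z = rng.standard_normal(100)
x = lanczos_fAb(A, z, m=30)
```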

Relevance:

30.00%

Publisher:

Abstract:

Genomic and proteomic analyses have attracted a great deal of interest in biological research in recent years. Many methods have been applied to discover useful information contained in the enormous databases of genomic sequences and amino acid sequences, and the results of these investigations inspire further research in biological fields in return. These biological sequences, which may be considered as multiscale sequences, have specific features that need further effort to characterise using more refined methods. This project aims to study some of these biological challenges with multiscale analysis methods and a stochastic modelling approach. The first part of the thesis aims to cluster some unknown proteins, and to classify their families as well as their structural classes. A development in proteomic analysis is concerned with the determination of protein functions, and the first step in this development is to classify proteins and predict their families. This motivates us to study some unknown proteins from specific families, and to cluster them into families and structural classes. We select a large number of proteins from the same families or superfamilies, and link them to simulate unknown large proteins from these families. We use multifractal analysis and the wavelet method to capture the characteristics of these linked proteins. The simulation results show that the method is valid for the classification of large proteins. The second part of the thesis aims to explore the relationships of proteins based on a layered comparison with their components. Many methods are based on homology of proteins, because resemblance at the protein sequence level normally indicates similarity of functions and structures. However, some proteins may have similar functions with low sequence identity. We consider protein sequences at a detailed level to investigate the problem of comparing proteins. The comparison is based on the empirical mode decomposition (EMD), and protein sequences are analysed through their intrinsic mode functions. A measure of similarity is introduced with a new cross-correlation formula. The similarity results show that the EMD is useful for detecting functional relationships of proteins. The third part of the thesis aims to investigate the transcriptional regulatory network of the yeast cell cycle via stochastic differential equations. As the investigation of genome-wide gene expression has become a focus in genomic analysis, researchers have tried to understand the mechanisms of the yeast genome for many years, yet how cells control gene expression still needs further investigation. We use a stochastic differential equation to model the expression profile of a target gene, and modify the model with a Gaussian membership function. For each target gene, a transcriptional rate is obtained, and the estimated transcriptional rate is also calculated with information from five possible transcriptional regulators. Some regulators of these target genes are verified against the related references. With these results, we construct a transcriptional regulatory network for genes from the yeast Saccharomyces cerevisiae. The construction of the transcriptional regulatory network is useful for uncovering more mechanisms of the yeast cell cycle.
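
As a sketch of the third part's starting point, here is an Euler-Maruyama simulation of a simple mean-reverting expression-profile SDE; the drift, noise level and parameter names are invented, and the thesis's actual model (with its Gaussian membership modification and five candidate regulators) is more elaborate.

```python
import numpy as np

# Euler-Maruyama simulation of dx = (r - d*x) dt + s dW, where r plays the
# role of the transcriptional rate being estimated and d is a decay rate.
rng = np.random.default_rng(42)
r, d, s = 1.2, 0.3, 0.15      # transcription rate, decay rate, noise level
dt, steps = 0.01, 2000

x = np.empty(steps); x[0] = 0.0
for t in range(steps - 1):
    dW = rng.normal(0.0, np.sqrt(dt))            # Brownian increment
    x[t + 1] = x[t] + (r - d * x[t]) * dt + s * dW

# Fitting r (and d, s) to an observed expression profile would proceed by
# maximising the likelihood of the observed increments under this scheme.
```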

Relevance:

30.00%

Publisher:

Abstract:

Optimal design for generalized linear models has primarily focused on univariate data. Often, experiments are performed that have multiple dependent responses described by regression-type models, and it is of interest to design the experiment for all of these responses. This requires a multivariate distribution underlying a pre-chosen model for the data. Here, we consider the design of experiments for bivariate binary data which are dependent. We explore copula functions, which provide a rich and flexible class of structures for deriving joint distributions for bivariate binary data. We present methods for deriving optimal experimental designs for dependent bivariate binary data using copulas, and demonstrate that, by including the dependence between responses in the design process, more efficient parameter estimates are obtained than by the usual practice of simply designing for a single variable only. Further, we investigate the robustness of the designs with respect to initial parameter estimates and the choice of copula function, and also show the performance of compound criteria within this bivariate binary setting.
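
To illustrate how a copula turns two marginal success probabilities into a joint distribution for dependent bivariate binary responses, here is a sketch using the Farlie-Gumbel-Morgenstern (FGM) copula; the paper does not commit to this family here, and evaluating the copula at the success probabilities is one of several conventions.

```python
import numpy as np

def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula; theta in [-1, 1] controls the
    (weak) dependence between the two margins."""
    return u * v * (1 + theta * (1 - u) * (1 - v))

def joint_binary_probs(p1, p2, theta):
    """Joint cell probabilities for dependent bivariate binary responses
    with marginal success probabilities p1 and p2."""
    p11 = fgm_copula(p1, p2, theta)
    p10 = p1 - p11
    p01 = p2 - p11
    p00 = 1 - p1 - p2 + p11
    return np.array([[p00, p01], [p10, p11]])

# Illustrative marginals from two logistic models at a design point x.
x, beta1, beta2 = 0.5, (-0.2, 1.0), (0.4, -0.8)
p1 = 1 / (1 + np.exp(-(beta1[0] + beta1[1] * x)))
p2 = 1 / (1 + np.exp(-(beta2[0] + beta2[1] * x)))
print(joint_binary_probs(p1, p2, theta=0.6))
```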

Relevance:

30.00%

Publisher:

Abstract:

Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.

Relevance:

30.00%

Publisher:

Abstract:

In 1980 Alltop produced a family of cubic phase sequences that nearly meet the Welch bound for maximum non-peak correlation magnitude. This family of sequences was shown by Wootters and Fields to be useful for quantum state tomography. Alltop's construction used a function that is not planar, but whose difference function is planar. In this paper we show that Alltop-type functions cannot exist in fields of characteristic 3, and that for a known class of planar functions, x^3 is the only Alltop-type function.
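
For readers outside this literature, the standard definitions the abstract alludes to are worth recording. For a function f over the finite field F_q and a nonzero shift a, the difference function is

```latex
D_a f(x) = f(x + a) - f(x), \qquad a \in \mathbb{F}_q^{\times}.
```

A function f is planar if every D_a f is a bijection of F_q; it is of Alltop type if f itself is not planar but every difference function D_a f is planar, as with Alltop's cubic f(x) = x^3.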

Relevance:

30.00%

Publisher:

Abstract:

We define a pair-correlation function that can be used to characterize spatiotemporal patterning in experimental images and snapshots from discrete simulations. Unlike previous pair-correlation functions, the pair-correlation functions developed here depend on the location and size of objects. The pair-correlation function can be used to indicate complete spatial randomness, aggregation or segregation over a range of length scales, and quantifies spatial structures such as the shape, size and distribution of clusters. Comparing pair-correlation data for various experimental and simulation images illustrates the potential of these functions as a summary statistic for calibrating discrete models of various physical processes.
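
A hedged sketch of the basic location-only estimator that such pair-correlation functions generalise: distances between all pairs of points are binned and normalised by their expectation under complete spatial randomness (CSR). Object sizes, which the functions developed in the paper also account for, are omitted, as are edge corrections.

```python
import numpy as np

def pair_correlation(points, r_edges, domain_area):
    """Estimate g(r) for 2-D point locations: histogram of pairwise
    distances, normalised by the CSR expectation (edge effects ignored).
    g(r) ~ 1 under CSR, > 1 indicates aggregation, < 1 segregation."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                  # unordered pairs
    counts, _ = np.histogram(d, bins=r_edges)
    density = n / domain_area
    shell = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = density * shell * n / 2              # CSR pairs per annulus
    return counts / expected

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(300, 2))             # CSR points, 10 x 10 domain
g = pair_correlation(pts, np.linspace(0.1, 2.0, 11), domain_area=100.0)
print(g)                                            # should hover around 1
```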