961 results for Trigonometry Formulas
Abstract:
In this paper we continue the development of the differential calculus started in Aragona et al. (Monatsh. Math. 144: 13-29, 2005). Guided by the so-called sharp topology and the interpretation of Colombeau generalized functions as point functions on generalized point sets, we introduce the notion of membranes and extend the definition of integrals, given in Aragona et al. (Monatsh. Math. 144: 13-29, 2005), to integrals defined on membranes. We use this to prove a generalized version of the Cauchy formula and to obtain the Goursat Theorem for generalized holomorphic functions. A number of results from classical differential and integral calculus, like the inverse and implicit function theorems and Green's theorem, are transferred to the generalized setting. Further, we indicate that solution formulas for transport and wave equations with generalized initial data can be obtained as well.
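For orientation, the classical identity being generalized here is the Cauchy integral formula: for $f$ holomorphic on and inside a simple closed contour $\gamma$ and $w$ interior to $\gamma$,
$$f(w) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z - w}\, dz.$$
Roughly speaking, the paper's version replaces the contour by a membrane of generalized points, so that the formula makes sense for generalized holomorphic functions.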
Abstract:
In this paper, we propose a new three-parameter long-term lifetime distribution induced by a latent complementary risk framework, with decreasing, increasing and unimodal hazard functions: the long-term complementary exponential geometric distribution. The new distribution arises from latent complementary risk scenarios, where the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks, together with the presence of long-term survival. The properties of the proposed distribution are discussed, including its probability density function and explicit algebraic formulas for its reliability, hazard and quantile functions and order statistics. Parameter estimation is based on the usual maximum-likelihood approach. A simulation study assesses the performance of the estimation procedure. We compare the new distribution with its particular cases, as well as with the long-term Weibull distribution, on three real data sets, observing its potential and competitiveness against some usual long-term lifetime distributions.
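As background (a sketch of the standard construction, not necessarily the paper's exact parametrization): long-term survival models attach a cured proportion $\pi$ to a proper survival function $S_0(t)$,
$$S_{\mathrm{pop}}(t) = \pi + (1 - \pi)\, S_0(t), \qquad 0 < \pi < 1,$$
so that $S_{\mathrm{pop}}(t) \to \pi$ as $t \to \infty$. In this framework the three parameters would presumably be $\pi$ together with the two parameters of the complementary exponential geometric baseline.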
Abstract:
The method of steepest descent is used to study the integral kernel of a family of normal random matrix ensembles with eigenvalue distribution
$$P_N(z_1, \ldots, z_N) = Z_N^{-1}\, e^{-N \sum_{i=1}^{N} V_\alpha(z_i)} \prod_{1 \le i < j \le N} |z_i - z_j|^2,$$
where $V_\alpha(z) = |z|^{\alpha}$, $z \in \mathbb{C}$ and $\alpha \in \left]0, \infty\right[$. Asymptotic formulas with error estimate on sectors are obtained. A corollary of these expansions is a scaling limit for the $n$-point function in terms of the integral kernel for the classical Segal-Bargmann space. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.3688293]
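For reference, the reproducing kernel of the classical Segal-Bargmann space (entire functions that are square-integrable against a Gaussian weight) is, up to normalization,
$$K(z, w) = e^{z \bar{w}},$$
and it is in terms of this kernel that the scaling limit of the $n$-point function is expressed.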
Abstract:
Many findings from research, as well as reports from teachers, describe students' problem-solving strategies as manipulation of formulas by rote. The resulting dissatisfaction with quantitative physics textbook problems seems to influence attitudes towards the role of mathematics in physics education in general. Mathematics is often seen as a mere tool for calculation that hinders a conceptual understanding of physical principles. However, the role of mathematics cannot be reduced to this technical aspect. Hence, instead of setting mathematics aside, we delve into the nature of physical science to reveal the strong conceptual relationship between mathematics and physics. Moreover, we suggest that, for both prospective teaching and further research, a focus on deeply exploring this interdependency can significantly improve the understanding of physics. To provide a suitable basis, we develop a new model that can be used for analysing different levels of mathematical reasoning within physics. It also serves as a guideline for shifting the attention from technical to structural mathematical skills while teaching physics. We demonstrate its applicability for analysing physical-mathematical reasoning processes with an example.
Abstract:
The concept of effective population size (N_e) is an important measure of representativeness in many areas. In this research, we consider the statistical properties of the number of contributed gametes under practical situations by adapting Crow and Denniston's (1988) N_e formulas for dioecious species. Three sampling procedures were considered. In all circumstances, results show that as the offspring sex ratio (r) deviates from 0.5, N_e values become smaller, and the efficiency of gametic control for increasing N_e is reduced. For finite populations, where all individuals are potentially functional parents, the reduction in N_e due to an unequal sex ratio can be compensated for through female gametic control when 0.28 <= r <= 0.72. This outcome is important when r is unknown. When only a fraction of the individuals in a population is taken for reproduction, N_e is meaningful only if the size of the reference population is clearly defined. Gametic control is a compensating factor in accession regeneration when the viability of the accession is around 70 or 75%. For germ-plasm collection, when parents are a very small fraction of the population, maximum N_e will be approximately 47% and 57% of the total number of offspring sampled, with female gametic control, r varying between 0.3 and 0.5, and being constant over generations.
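Although Crow and Denniston's expressions are more elaborate, the qualitative effect of the sex ratio can be read off the classical formula for $N_m$ breeding males and $N_f$ breeding females:
$$N_e = \frac{4 N_m N_f}{N_m + N_f} = 4 r (1 - r) N, \qquad N = N_m + N_f, \quad r = N_m / N.$$
The product $r(1-r)$ is maximized at $r = 0.5$, which is why $N_e$ shrinks as the offspring sex ratio deviates from 0.5.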
Abstract:
There is no consensus regarding the accuracy of bioimpedance for the determination of body composition in older persons. This cross-sectional study, conducted in an urban community in Ribeirão Preto, Brazil, aimed to compare the lean body mass of healthy older volunteers as assessed by the deuterium dilution method (the reference) with the values obtained by two frequently used bioelectrical impedance formulas and by one formula specifically developed for a Latin-American population. Twenty-one volunteers were studied, 12 of them women, with mean age 72 +/- 6.7 years. Fat-free mass was determined simultaneously by the deuterium dilution method and by bioelectrical impedance, and the results were compared. For bioelectrical impedance, body composition was calculated by the formulas of Deurenberg, of Lukaski and Bolonchuk, and of Valencia et al. Lean body mass as determined by bioelectrical impedance was 37.8 +/- 9.2 kg with the Lukaski and Bolonchuk formula, 37.4 +/- 9.3 kg with the Deurenberg formula, and 43.2 +/- 8.9 kg with the Valencia et al. formula. The results were significantly correlated with those obtained by the deuterium dilution method (41.6 +/- 9.3 kg), with r = 0.963, 0.932 and 0.971, respectively. In this study, lean body mass of older persons obtained by bioelectrical impedance showed good correlation with the values obtained by the deuterium dilution method; the formula of Valencia et al., developed for a Latin-American population, was the most accurate.
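A minimal sketch of this kind of agreement analysis, assuming hypothetical fat-free-mass values (the arrays and names below are illustrative, not the study's data):

```python
# Hypothetical comparison of a bioimpedance formula against the
# deuterium dilution reference; all values are illustrative only.
import numpy as np
from scipy.stats import pearsonr

deuterium = np.array([41.2, 35.8, 52.1, 38.4, 44.9])     # reference FFM (kg)
bioimpedance = np.array([42.0, 34.9, 53.5, 37.2, 46.1])  # formula-based FFM (kg)

r, p_value = pearsonr(deuterium, bioimpedance)  # linear agreement between methods
bias = np.mean(bioimpedance - deuterium)        # mean difference (kg)
print(f"r = {r:.3f}, p = {p_value:.3g}, mean bias = {bias:.2f} kg")
```

Note that a high correlation alone does not establish accuracy; the conclusion about the Valencia et al. formula also rests on how close its estimates are to the reference values.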
Abstract:
In the CP-violating Minimal Supersymmetric Standard Model, we study the production of a neutralino-chargino pair at the LHC. For their decays into three leptons, we analyze CP asymmetries which are sensitive to the CP phases of the neutralino and chargino sectors. We present analytical formulas for the entire production and decay process, and identify the CP-violating contributions in the spin correlation terms. This allows us to define the optimal CP asymmetries. We present a detailed numerical analysis of the cross sections, branching ratios, and the CP observables. For light neutralinos, charginos, and squarks, the asymmetries can reach several tens of percent. We estimate the discovery potential of the LHC for observing CP violation in the trilepton channel.
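Such asymmetries are typically built from T-odd products of the particle momenta; schematically, for an observable $\mathcal{T} = \vec{p}_1 \cdot (\vec{p}_2 \times \vec{p}_3)$ one counts events with
$$A_{\mathrm{CP}} = \frac{N(\mathcal{T} > 0) - N(\mathcal{T} < 0)}{N(\mathcal{T} > 0) + N(\mathcal{T} < 0)},$$
though the optimal observables constructed in the paper may differ in detail.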
Abstract:
Synchronous distributed generators are prone to continue operating islanded after contingencies, which is usually not allowed due to safety and power-quality issues. Several anti-islanding techniques therefore exist; however, most of them present technical limitations, so they are likely to fail in certain situations. It is thus important to quantify their performance and determine whether the scheme under study is adequate. In this context, this paper proposes an index to evaluate the effectiveness of the anti-islanding frequency-based relays commonly used to protect synchronous distributed generators. The method is based on the calculation of a numerical index that indicates the fraction of the global period of analysis during which the system is unprotected against islanding. Although this index can be calculated precisely from several electromagnetic transient simulations, a practical method is also proposed to obtain it directly from simple analytical formulas or lookup tables. The results show that the proposed approach can assist distribution engineers in assessing and setting anti-islanding protection schemes.
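In symbols, the idea of the index (a sketch, not the paper's exact notation) is a fraction of time: if $T$ is the global period of analysis and the $t_k$ are the sub-intervals during which the frequency-based relay would fail to detect an islanding event, then
$$I = \frac{\sum_k t_k}{T},$$
so that $I = 0$ means the generator is always protected and $I = 1$ means it is never protected.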
Abstract:
Background: Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of these methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use those methods in their research. Here we present a user-friendly toolbox that allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results: We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories and to researchers with no background in Mathematics or Computer Science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion: GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms, previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in Molecular Biology.
Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere
Abstract:
We obtain explicit formulas, in terms of the usual gamma function, for the eigenvalues of integral operators generated by continuous dot product kernels defined on the sphere. Using them, we present both a procedure for describing sharp bounds for the eigenvalues and their asymptotic behavior near 0. We illustrate our results with examples, among them the integral operator generated by a Gaussian kernel. Finally, we sketch complex versions of our results to cover the cases in which the sphere sits in a Hermitian space.
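Recall that a continuous dot product kernel on the sphere is one of the form
$$K(x, y) = \sum_{n=0}^{\infty} a_n \langle x, y \rangle^{n}, \qquad a_n \ge 0, \quad \sum_{n} a_n < \infty,$$
and the eigenvalues of the induced integral operator are, roughly, weighted sums of the coefficients $a_n$; the gamma-function expressions arise from integrating the powers $\langle x, y \rangle^{n}$ against spherical harmonics.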
Abstract:
The Euler obstruction of a function f can be viewed as a generalization of the Milnor number for functions defined on singular spaces. In this work, using the Euler obstruction of a function, we establish several Lê–Greuel type formulas for germs f:(X,0)→(C,0) and g:(X,0)→(C,0). We give applications when g is a generic linear form and when f and g have isolated singularities.
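For comparison, the classical Milnor number of a holomorphic germ $f\colon (\mathbb{C}^n, 0) \to (\mathbb{C}, 0)$ with an isolated singularity is
$$\mu(f) = \dim_{\mathbb{C}} \mathcal{O}_n \big/ \left( \partial f / \partial z_1, \ldots, \partial f / \partial z_n \right),$$
and it is this invariant that the Euler obstruction of a function extends to germs defined on a singular space $X$.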
Abstract:
This thesis presents Bayesian solutions to inference problems for three types of social network data structures: a single observation of a social network, repeated observations on the same social network, and repeated observations on a social network developing through time. A social network is conceived as a structure consisting of actors and their social interaction with each other. A common conceptualisation of social networks is to let the actors be represented by nodes in a graph, with edges between pairs of nodes that are relationally tied to each other according to some definition. Statistical analysis of social networks is to a large extent concerned with modelling these relational ties, which lends itself to empirical evaluation.

The first paper deals with a family of statistical models for social networks called exponential random graphs, which take various structural features of the network into account. In general, the likelihood functions of exponential random graphs are only known up to a constant of proportionality. A procedure for performing Bayesian inference using Markov chain Monte Carlo (MCMC) methods is presented. The algorithm consists of two basic steps: one in which an ordinary Metropolis-Hastings updating step is used, and another in which an importance sampling scheme is used to calculate the acceptance probability of the Metropolis-Hastings step.

In the second paper, a method for modelling reports given by actors (or other informants) on their social interaction with others is investigated in a Bayesian framework. The model contains two basic ingredients: the unknown network structure, and functions that link this unknown network structure to the reports given by the actors. These functions take the form of probit link functions. An intrinsic problem is that the model is not identified, meaning that there are combinations of values of the unknown structure and of the parameters in the probit link functions that are observationally equivalent. Instead of using restrictions to achieve identification, it is proposed that the different observationally equivalent combinations of parameters and unknown structure be investigated a posteriori. Estimation of parameters is carried out using Gibbs sampling with a switching device that enables transitions between posterior modal regions. The main goal of the procedures is to provide tools for comparing different model specifications.

Papers 3 and 4 propose Bayesian methods for longitudinal social networks. The premise of the models investigated is that overall change in social networks occurs as a consequence of sequences of incremental changes. Models for the evolution of social networks using continuous-time Markov chains are meant to capture these dynamics. Paper 3 presents an MCMC algorithm for exploring the posteriors of parameters of such Markov chains. More specifically, the unobserved evolution of the network in between observations is explicitly modelled, thereby avoiding the need to deal with explicit formulas for the transition probabilities. This enables likelihood-based parameter inference in a wider class of network evolution models than was previously available. Paper 4 builds on the inference procedure proposed in Paper 3 and demonstrates how to perform model selection for a class of network evolution models.
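For concreteness, the exponential random graph model referred to in the first paper has the standard form
$$P_{\theta}(X = x) = \frac{\exp\{\theta^{\top} s(x)\}}{\kappa(\theta)},$$
where $s(x)$ collects structural statistics of the graph $x$ (for example, counts of edges and triangles) and the normalizing constant $\kappa(\theta)$ sums over all graphs on the given node set. This sum is intractable, which is exactly why the likelihood is only known up to a constant of proportionality and why the importance-sampling step inside the Metropolis-Hastings algorithm is needed.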
Abstract:
As is well known, in any infinite-dimensional Banach space one may find fixed point free self-maps of the unit ball, retractions of the unit ball onto its boundary, contractions of the unit sphere, and nonzero maps without positive eigenvalues and normalized eigenvectors. In this paper, we give upper and lower estimates, or even explicit formulas, for the minimal Lipschitz constant and measure of noncompactness of such maps.
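As a reminder, the (Kuratowski) measure of noncompactness of a bounded set $A$ in a Banach space is
$$\alpha(A) = \inf\{\varepsilon > 0 : A \text{ has a finite cover by sets of diameter at most } \varepsilon\},$$
and for a map $T$ one estimates the smallest constant $k$ with $\alpha(T(A)) \le k\, \alpha(A)$ for all bounded $A$, in parallel with the minimal Lipschitz constant.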
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent, committed-choice constraint logic programming language consisting of guarded rules that transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a].

Compositionality is the first CHR aspect considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which permits us to trace the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses a history to avoid trivial non-termination.

Program transformation is the second CHR aspect considered, with particular regard to the unfolding technique. This technique is an appealing approach that allows us to optimize a given program, and more specifically to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. The unfolding technique is one of the basic operations used by most program transformation systems. It consists of replacing a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, we show that confluence and termination are maintained between the original and transformed programs.

This thesis is organized in the following manner. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Section 1.2 then introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 presents the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation requires a particular annotated syntax, which is introduced in Section 3.1, and its related modified operational semantics, presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, Section 3.4 discusses the problems related to the replacement of a rule by its unfolded version, which in turn yields a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related works and directions for future work.
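To make the rule format concrete: CHR programs consist of guarded rules of essentially two kinds, written in the usual notation as
$$\text{simplification: } H \Leftrightarrow G \mid B, \qquad \text{propagation: } H \Rightarrow G \mid B,$$
where the head $H$ is a multi-set of constraints, $G$ is the guard and $B$ is the body. A simplification rule replaces $H$ by $B$, while a propagation rule adds $B$ while keeping $H$; the latter is the source of the trivial non-termination that the history mechanism discussed above prevents.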
Abstract:
This PhD thesis describes the application of instrumental analytical techniques suitable for the study of food products fundamental to the human diet, such as extra virgin olive oil and dairy products. These products, widespread in the market and of high nutritional value, are increasingly recognized for their healthful properties, although their lipid fraction might contain some components unfavorable to human health. The research activity was structured in the following investigations:
"Comparison of different techniques for trans fatty acids analysis"
"Fatty acids analysis of outcrop milk cream samples, with particular emphasis on the content of Conjugated Linoleic Acid (CLA) and trans Fatty Acids (TFA), using a 100 m high-polarity capillary column"
"Evaluation of the oxidized fatty acids (OFA) content during Parmigiano-Reggiano cheese seasoning"
"Direct analysis of 4-desmethyl sterols and two dihydroxy triterpenes in saponified vegetable oils (olive oil and others) using liquid chromatography-mass spectrometry"
"Quantitation of long-chain polyunsaturated fatty acids (LC-PUFA) in base infant formulas by gas chromatography, and evaluation of the accuracy of the blending phases during their preparation"
"Fatty acids composition of Parmigiano Reggiano cheese samples, with emphasis on trans isomers (TFA)"