955 results for Geometric Function Theory


Relevance: 80.00%

Abstract:

This article is an updated and modified version of a Spanish article published in MonTi 6 (cf. Tarp 2014a). It deals with specialised translation dictionaries. Based on the principles of the function theory, it analyses the different phases and sub-phases of the translation process from a lexicographical perspective and shows that a translation dictionary should be much more than a mere bilingual dictionary if it is truly to meet its users' complex needs. Thereafter, it presents a global concept of a translation dictionary which includes various mono- and bilingual components in both language directions. Finally, the article discusses, by means of two concrete online projects, how this concept can be applied on the Internet in order to develop high-quality translation dictionaries with quick access to data that are even better adapted to the needs of each translator.

Relevance: 80.00%

Abstract:

"November 1974."

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 80.00%

Abstract:

In the recent past, one of the main concerns of research in the field of Hypercomplex Function Theory in Clifford Algebras has been the development of a variety of new tools for a deeper understanding of its true elementary roots in the Function Theory of one Complex Variable. In this context, the study of the space of monogenic (Clifford holomorphic) functions through its stratification by homogeneous monogenic polynomials is a useful tool. In this paper we consider the structure of those polynomials of four real variables with binomial expansion. This allows a complete characterization of sequences of 4D generalized monogenic Appell polynomials in terms of three different types of polynomials. A particularly important case is that of monogenic polynomials which are simply isomorphic to the integer powers of one complex variable and are therefore also called pseudo-complex powers.
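
As a sketch of the Appell property underlying the abstract (the standard definition from this literature, assumed rather than quoted from the paper): a sequence of monogenic polynomials mimics, under the hypercomplex derivative, the behaviour of the complex powers.

```latex
% Sketch (standard definition, assumed): (P_n) is an Appell sequence of
% monogenic polynomials with respect to the hypercomplex derivative D if
P_0 \equiv 1, \qquad D P_n = n\,P_{n-1}, \quad n \ge 1,
% mirroring the complex powers, for which (z^n)' = n\,z^{n-1};
% the "pseudo-complex powers" are the sequences isomorphic to z^n.
```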

Relevance: 50.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
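
As an illustrative sketch of the hop-distance approximation discussed above (node count, radius and anchor choice are all hypothetical, not taken from the paper), the following simulation builds a random geometric graph, computes hop counts from one anchor by breadth-first search, and compares them with the true Euclidean distances:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

# Deploy n nodes uniformly in the unit square; link pairs within radius r.
n, r = 500, 0.1
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
adj = [np.flatnonzero((dist[i] <= r) & (np.arange(n) != i)) for i in range(n)]

def hops_from(src):
    """Hop counts from an anchor node via breadth-first search."""
    h = np.full(n, -1)
    h[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if h[v] < 0:
                h[v] = h[u] + 1
                q.append(v)
    return h

h = hops_from(0)
reachable = h > 0
# A node h hops away is at Euclidean distance at most r*h; the paper's
# approximation treats the distance as roughly proportional to h.
ratio = dist[0, reachable] / (r * h[reachable])
print(f"mean Euclidean/(r*hops) ratio: {ratio.mean():.2f}")
```

In a dense deployment the ratio concentrates well below 1 but stays roughly constant across hop counts, which is what makes the proportionality approximation usable.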

Relevance: 50.00%

Abstract:

We continue the investigation of the algebraic and topological structure of the algebra of Colombeau generalized functions with the aim of building up the algebraic basis for the theory of these functions. This was started in a previous work of Aragona and Juriaans, where the algebraic and topological structure of the Colombeau generalized numbers was studied. Here, among other important things, we completely determine the minimal primes of K̄ and introduce several invariants of the ideals of G(Ω). The main tools we use are the algebraic results obtained by Aragona and Juriaans and the theory of differential calculus on generalized manifolds developed by Aragona and co-workers. The main achievement of the differential calculus is that all classical objects, such as distributions, become C∞-functions. Our purpose is to build an independent and intrinsic theory for Colombeau generalized functions and place them in a wider context.

Relevance: 40.00%

Abstract:

The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies, both in explaining and in predicting crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study proposes a latent class model consistent with a multiple crash process theory, and investigates the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated under the assumption that crashes arise from two distinct risk generating processes: engineering factors and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayes Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the EB-NB model. The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
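
As a rough illustration of the latent-class idea (a frequentist EM stand-in for the Bayesian model in the abstract; the rates, sample size and mixing weight are all invented for the example), one can fit a two-component Poisson mixture to synthetic site-level crash counts:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

# Synthetic crash counts from two latent processes
# (hypothetical rates 1.0 and 8.0, mixing weight 0.6).
z = rng.random(2000) < 0.6
y = np.where(z, rng.poisson(1.0, 2000), rng.poisson(8.0, 2000))

# EM for a two-component Poisson mixture.
pi, lam = 0.5, np.array([0.5, 5.0])
for _ in range(200):
    # E-step: posterior responsibility of the low-rate component
    p0 = pi * poisson.pmf(y, lam[0])
    p1 = (1 - pi) * poisson.pmf(y, lam[1])
    w = p0 / (p0 + p1)
    # M-step: re-estimate mixing weight and component rates
    pi = w.mean()
    lam = np.array([(w * y).sum() / w.sum(),
                    ((1 - w) * y).sum() / (1 - w).sum()])

print(f"weight={pi:.2f}, rates={lam.round(2)}")
```

The recovered weight and rates separate the two generating processes, which is the mechanism the blackspot model exploits, with priors replacing the point estimates here.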

Relevance: 40.00%

Abstract:

The moments of the hadronic spectral functions are of interest for the extraction of the strong coupling alpha(s) and other QCD parameters from the hadronic decays of the tau lepton. Motivated by the recent analyses of a large class of moments in the standard fixed-order and contour-improved perturbation theories, we consider the perturbative behavior of these moments in the framework of a QCD nonpower perturbation theory, defined by the technique of series acceleration by conformal mappings, which simultaneously implements renormalization-group summation and has a tame large-order behavior. Two recently proposed models of the Adler function are employed to generate the higher-order coefficients of the perturbation series and to predict the exact values of the moments, required for testing the properties of the perturbative expansions. We show that the contour-improved nonpower perturbation theories and the renormalization-group-summed nonpower perturbation theories have very good convergence properties for a large class of moments of the so-called "reference model," including moments that are poorly described by the standard expansions. The results provide additional support for the plausibility of the description of the Adler function in terms of a small number of dominant renormalons.

Relevance: 40.00%

Abstract:

A self-consistent mode coupling theory (MCT) with microscopic inputs of equilibrium pair correlation functions is developed to analyze electrolyte dynamics. We apply the theory to calculate the concentration dependence of (i) time dependent ion diffusion, (ii) the intermediate scattering function of the constituent ions, and (iii) ion solvation dynamics in electrolyte solution. Brownian dynamics with implicit water molecules and molecular dynamics with explicit water are used to check the theoretical predictions. The time dependence of the ionic self-diffusion coefficient and the corresponding intermediate scattering function evaluated from our MCT approach show quantitative agreement with earlier experimental and present Brownian dynamics simulation results. With increasing concentration, the dispersion of electrolyte friction is found to occur at increasingly higher frequency, due to the faster relaxation of the ion atmosphere. The wave number dependence of the intermediate scattering function, F(k, t), exhibits markedly different relaxation dynamics at different length scales. At small wave numbers, we find the emergence of a step-like relaxation, indicating the presence of both fast and slow time scales in the system. Such behavior invites an intriguing analogy with the temperature dependent relaxation dynamics of supercooled liquids. We find that the solvation dynamics of a tagged ion exhibits a power law decay at long times; the decay can also be fitted to a stretched exponential form. The emergence of the power law in solvation dynamics has been tested by carrying out long Brownian dynamics simulations with varying ionic concentrations. The solvation time correlation and ion-ion intermediate scattering functions indeed exhibit highly interesting, non-trivial dynamical behavior at intermediate to longer times that requires further experimental and theoretical study. (c) 2015 AIP Publishing LLC.
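
The stretched-exponential fit mentioned for the solvation decay can be sketched as follows (synthetic, noiseless data with invented parameters tau=2.0 and beta=0.6; scipy's curve_fit stands in for whatever fitting procedure the authors used):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the solvation time correlation function S(t),
# generated from a stretched exponential with tau=2.0, beta=0.6.
t = np.linspace(0.01, 10.0, 200)
S = np.exp(-(t / 2.0) ** 0.6)

def kww(t, tau, beta):
    # Kohlrausch-Williams-Watts (stretched exponential) form
    return np.exp(-(t / tau) ** beta)

(tau, beta), _ = curve_fit(kww, t, S, p0=(1.0, 1.0), bounds=(1e-6, np.inf))
print(f"tau={tau:.2f}, beta={beta:.2f}")
```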

Relevance: 40.00%

Abstract:

The optimal power-delay tradeoff is studied for a time-slotted point-to-point link with independently and identically distributed fading, with perfect channel state information at both transmitter and receiver, and with random packet arrivals to the transmitter queue. It is assumed that the transmitter can control the number of packets served by controlling the transmit power in the slot. The optimal tradeoff between average power and average delay is analyzed for stationary and monotone transmitter policies. For such policies, an asymptotic lower bound on the minimum average delay of the packets is obtained, when average transmitter power approaches the minimum average power required for transmitter queue stability. The asymptotic lower bound on the minimum average delay is obtained from geometric upper bounds on the stationary distribution of the queue length. This approach, which uses geometric upper bounds, also leads to an intuitive explanation of the asymptotic behavior of average delay. The asymptotic lower bounds, along with previously known asymptotic upper bounds, are used to identify three new cases where the order of the asymptotic behavior differs from that obtained from a previously considered approximate model, in which the transmit power is a strictly convex function of real valued service batch size for every fade state.
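
The geometric-tail behaviour of the queue length invoked in the analysis can be illustrated with a toy discrete-time queue (Bernoulli arrivals and services with invented rates; not the fading model of the paper): successive tail probabilities P(Q >= n) should decay by a roughly constant factor.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discrete-time single-server queue: Bernoulli(a) arrivals and
# Bernoulli(s) service completions per slot, with a < s for stability.
a, s, T = 0.3, 0.5, 200_000
q, hist = 0, np.zeros(50, dtype=int)
for _ in range(T):
    q += rng.random() < a            # possible arrival
    if q > 0 and rng.random() < s:   # possible service completion
        q -= 1
    if q < 50:
        hist[q] += 1

p = hist / hist.sum()
tail = np.cumsum(p[::-1])[::-1]      # tail[n] ~ P(Q >= n)
ratios = tail[3:9] / tail[2:8]       # successive tail decay factors
print("tail decay factors:", ratios.round(2))
```

The near-constant decay factor is exactly the geometric upper-bound structure that the paper exploits to bound average delay via Little's-law-style arguments.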

Relevance: 40.00%

Abstract:

This paper is aimed at establishing a statistical theory of rotational and vibrational excitation of polyatomic molecules by an intense IR laser. Starting from the Wigner function of quantum statistical mechanics, we treat the rotational motion in the classical approximation; the vibrational modes are classified into active modes, which are coupled directly with the laser, and background modes, which are not coupled with the laser. The reduced Wigner function, i.e., the Wigner function integrated over all background coordinates, should satisfy an integro-differential equation. We introduce the idea of "viscous damping" to handle the interaction between the active modes and the background. The damping coefficient can be calculated with the aid of the well-known Schwartz–Slawsky–Herzfeld theory. The resulting equation is solved by the method of moment equations. There is only one adjustable parameter in our scheme; it is introduced due to the lack of precise knowledge about the molecular potential. The theory developed in this paper satisfactorily explains the recent absorption experiments on SF6 irradiated by a short-pulse CO2 laser, which are in sharp contradiction with the prevailing quasi-continuum theory. We also refine the density of energy levels which is responsible for the multiphoton excitation of polyatomic molecules.

Relevance: 40.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication, distributed storage of data, and even the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region: the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
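
The group-characterizable construction can be made concrete with a toy example (the group and subgroups here are an arbitrary small choice, far too small to violate Ingleton): for subgroups G_1, ..., G_n of a finite group G, the vector with entries h_A = log2(|G| / |intersection of G_i for i in A|) is an achievable entropy vector.

```python
from itertools import chain, combinations
from math import log2

# Toy group-characterizable entropy vector: G = Z2 x Z2 with its three
# order-2 subgroups (an arbitrary small example, chosen for illustration).
G = {(0, 0), (0, 1), (1, 0), (1, 1)}
subs = [
    {(0, 0), (1, 0)},
    {(0, 0), (0, 1)},
    {(0, 0), (1, 1)},
]

def h(A):
    """Entropy h_A = log2(|G| / |intersection of G_i for i in A|)."""
    inter = set(G)
    for i in A:
        inter &= subs[i]
    return log2(len(G) / len(inter))

for A in chain.from_iterable(combinations(range(3), k) for k in (1, 2, 3)):
    print(A, h(A))
```

Here every singleton has entropy 1 bit and every pair already determines the group element, mirroring three pairwise-independent bits with XOR structure.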

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
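
A minimal numerical sketch of the "subset of rows of a group Fourier matrix" idea (using the cyclic group Z7 rather than the thesis's general construction): choosing rows indexed by the (7,3,1) difference set {1, 2, 4} yields a 7-vector frame in C^3 whose coherence meets the equiangular value sqrt((N-M)/(M(N-1))).

```python
import numpy as np

# Harmonic frame: columns of a row-subset of the 7-point DFT matrix.
# Rows {1, 2, 4} form a (7,3,1) difference set, giving an equiangular
# tight frame (illustrative stand-in for the group-Fourier construction).
N, rows = 7, [1, 2, 4]
F = np.exp(-2j * np.pi * np.outer(rows, np.arange(N)) / N) / np.sqrt(len(rows))

# Coherence: largest |inner product| between distinct unit-norm frame vectors.
gram = np.abs(F.conj().T @ F)
np.fill_diagonal(gram, 0.0)
mu = gram.max()
print(f"coherence = {mu:.4f}")
```

All off-diagonal Gram magnitudes coincide, so the frame is equiangular and its coherence can be read off from character sums, which is the kind of bound the thesis derives via character theory.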

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.

Relevance: 40.00%

Abstract:

Using density functional theory calculations with the HSE06 functional, we obtain the structures of spin-polarized radicals on rutile TiO2(110), which is crucial for understanding photooxidation at the atomic level, and further calculate the thermodynamic stabilities of these radicals. By analyzing the results, we identify the structural features for hole trapping in the system, and reveal the mutual effects among the geometric structures, the energy levels of trapped-hole states, and their hole-trapping capacities. Furthermore, the results from the HSE06 functional are compared to those from DFT + U, and the stability trend of the radicals with respect to the number of slabs is tested. The effect of trapped holes on two important steps of the oxygen evolution reaction, i.e. water dissociation and oxygen removal, is investigated and discussed.

Relevance: 40.00%

Abstract:

The present study concerns the characterization of probability distributions using the residual entropy function. The concept of entropy is extensively used in the literature as a quantitative measure of the uncertainty associated with a random phenomenon. The commonly used lifetime models in reliability theory are the exponential, Pareto, beta, Weibull and gamma distributions. Several characterization theorems are obtained for the above models using reliability concepts such as the failure rate, mean residual life function, vitality function, and variance residual life function. Most of the work on the characterization of distributions in the reliability context centers on the failure rate or the residual life function. An important aspect of interest in the study of entropy is that of locating distributions for which Shannon's entropy is maximal subject to certain restrictions on the underlying random variable. We introduce the geometric vitality function and examine its properties; it is established that the geometric vitality function determines the distribution uniquely. The problem of averaging the residual entropy function is examined, and truncated versions of the entropies of higher order are defined. In this study it is established that the residual entropy function determines the distribution uniquely and that its constancy characterizes the geometric distribution.
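
The constancy result for the geometric distribution can be checked numerically (a sketch with an arbitrary success probability of 0.3 and a truncated support): the residual entropy H(t) = -sum over x > t of [p(x)/S(t)] log[p(x)/S(t)], where S(t) = P(X > t), takes the same value for every truncation point t, by memorylessness.

```python
import numpy as np

# Geometric pmf on {1, 2, ...} with an arbitrary success probability.
p_success = 0.3
x = np.arange(1, 500)
pmf = p_success * (1 - p_success) ** (x - 1)

def residual_entropy(t):
    """Entropy of the residual distribution of X given X > t."""
    tail = pmf[x > t]
    cond = tail / tail.sum()     # conditional (residual) pmf
    return -np.sum(cond * np.log(cond))

H = [residual_entropy(t) for t in (0, 5, 10, 20)]
print(np.round(H, 6))            # identical values across t
```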