998 results for Discrete q-integration
Abstract:
The researcher attempts to establish a theory of discrete functions in the complex plane. Classical analysis, q-basic theory, monodiffric theory, preholomorphic theory and q-analytic theory are used to develop concepts such as differentiation, integration and special functions.
Integration of an automatic storage and retrieval system (ASRS) in a discrete-part automation system
Abstract:
This technical report describes the work carried out in a project within the ERASMUS programme. The objective of this project was the Integration of an Automatic Warehouse in a Discrete-Part Automation System. The discrete-part automation system located at the LASCRI (Critical Systems) laboratory at ISEP was extended with automatic storage and retrieval of the manufacturing parts, through the integration of an automatic warehouse and an automatic guided vehicle (AGV).
Abstract:
A novel time integration scheme is presented for the numerical solution of the dynamics of discrete systems consisting of point masses and thermo-visco-elastic springs. Even with fully coupled constitutive laws for the elements, the obtained solutions strictly preserve the two laws of thermodynamics and the symmetries of the continuum evolution equations. Moreover, the unconditional control over the energy and the entropy growth has the effect of stabilizing the numerical solution, allowing the use of larger time steps than those suitable for comparable implicit algorithms. Proofs of these claims are provided in the article, as well as numerical examples that illustrate the performance of the method.
Abstract:
The formulation of thermodynamically consistent (TC) time integration methods was introduced by a general procedure based on the GENERIC form of the evolution equations for thermo-mechanical problems. The entropy was reported to be the best choice of thermodynamical variable for easily providing TC integrators, and the use of the internal energy was shown not to involve excessive complications. However, attempts to use the temperature in the design of GENERIC-based TC schemes have so far been unfruitful. This paper complements that procedure by presenting a TC scheme based on the temperature as the thermodynamical state variable. As a result, the problems which arise from the use of the entropy, mainly the definition of boundary conditions, are overcome. What is more, the newly proposed method exhibits the enhanced numerical stability and robustness properties of the entropy formulation.
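For orientation, a minimal statement of the GENERIC structure referred to above (notation assumed here, not taken from the paper): the state z evolves through an energy-driven reversible part and an entropy-driven irreversible part,

```latex
\frac{\mathrm{d}z}{\mathrm{d}t}
  = L(z)\,\frac{\partial E}{\partial z}
  + M(z)\,\frac{\partial S}{\partial z},
\qquad
L(z)\,\frac{\partial S}{\partial z} = 0,
\qquad
M(z)\,\frac{\partial E}{\partial z} = 0,
```

with L skew-symmetric and M symmetric positive semi-definite. The two degeneracy conditions are what a TC discretization has to preserve so that the discrete solution conserves energy and never decreases entropy, whichever state variable (entropy, internal energy or temperature) is chosen.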
Abstract:
This paper reports on the design of digital control for wind turbines and its relation to the quality of power fed into the Brazilian grid when a 192 MW wind farm equipped with doubly fed induction generators is connected to it. PWM converters are deployed as vector-controlled, current-regulated voltage sources for the generator rotors, providing independent control of both the active and reactive power of those generators. Both speed control and active power control strategies are analyzed, in the search for maximum efficiency of conversion of wind kinetic energy into electric power and enhanced quality of the delivered power. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Here, we study the stable integration of real time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite horizon MPC with guaranteed stability and additional constraints that enforce the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady-state model used in the target calculation is also considered. The dynamic part of the MPC model is also treated as unknown, but it is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low order system. (C) 2010 Elsevier Ltd. All rights reserved.
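As a rough sketch of the intermediate target-calculation layer described above (this is not the authors' formulation; the steady-state gain matrix G, the input bounds and all names are assumed for illustration, and a general-purpose solver stands in for a dedicated QP solver):

```python
import numpy as np
from scipy.optimize import minimize

def compute_reachable_target(G, y_rto, u_min, u_max):
    """Steady-state target y = G @ u closest to the RTO set point,
    with simple input bounds standing in for the MPC feasibility set."""
    def distance(u):
        # Squared distance between the reachable steady state and the RTO optimum
        return float(np.sum((G @ u - y_rto) ** 2))

    bounds = list(zip(u_min, u_max))
    u0 = np.clip(np.zeros(G.shape[1]), u_min, u_max)
    res = minimize(distance, u0, bounds=bounds)
    return G @ res.x, res.x

# Hypothetical 2-input/2-output plant with an unreachable RTO set point
G = np.array([[1.0, 0.5], [0.2, 1.0]])
y_t, u_t = compute_reachable_target(G, y_rto=np.array([3.0, 1.0]),
                                    u_min=np.array([0.0, 0.0]),
                                    u_max=np.array([1.0, 1.0]))
print(y_t, u_t)
```

The resulting target y_t is then what the lower MPC layer would track instead of the raw RTO set point.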
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, the step size cannot be controlled with the usual integration-formula error estimates. A step size control scheme for the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulation using a fixed time step is compared with simulation using a variable time step. The difference in computation time for the same accuracy between a variable step size and a fixed step size depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
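A minimal sketch of the step-doubling idea described above (function and variable names are assumed; a generic one-step update stands in for the paper's table-driven position/velocity calculation):

```python
def adapt_step(update, state, dt, tol, shrink=0.5, grow=1.5):
    """One adaptive step: compare one big step against two half steps.

    `update(state, dt)` advances a particle state (here a tuple of floats)
    by dt.  Returns the accepted state and a suggested next step size."""
    big = update(state, dt)
    small = update(update(state, dt / 2), dt / 2)
    error = max(abs(b - s) for b, s in zip(big, small))

    if error > tol:
        # Too inaccurate: retry this step with a smaller dt
        return adapt_step(update, state, dt * shrink, tol, shrink, grow)
    # Accept the more accurate two-half-step result and let the step grow
    return small, dt * grow

# Example with a simple constant-acceleration update of (position, velocity)
update = lambda s, dt: (s[0] + s[1] * dt, s[1] - 9.81 * dt)
state, dt = (0.0, 10.0), 0.1
state, dt = adapt_step(update, state, dt, tol=1e-4)
```

Because each particle carries its own dt, the scheme can pick a suitable step size per particle at every step, as the abstract describes.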
Abstract:
The paper presents an automated synthesis procedure for RFDSCAs. The algorithm determines several RFDSCA circuits from the top-level system specifications, all with the same maximum performance. The genetic synthesis tool optimizes a fitness function proportional to the RFDSCA quality factor and uses the ε-concept and a maximin sorting scheme to achieve a set of solutions well distributed along a non-dominated front. To confirm the results of the algorithm, three RFDSCAs were simulated in SpectreRF and one of them was implemented and tested. The design used a 0.25 μm BiCMOS process. All the results (synthesized, simulated and measured) are very close, which indicates that the genetic synthesis method is a very useful tool for designing optimum performance RFDSCAs.
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
In conducting genome-wide association studies (GWAS), analytical approaches leveraging biological information may further understanding of the pathophysiology of clinical traits. To discover novel associations with estimated glomerular filtration rate (eGFR), a measure of kidney function, we developed a strategy for integrating prior biological knowledge into the existing GWAS data for eGFR from the CKDGen Consortium. Our strategy focuses on single nucleotide polymorphisms (SNPs) in genes that are connected by functional evidence, determined by literature mining and gene ontology (GO) hierarchies, to genes near previously validated eGFR associations. It then requires association thresholds consistent with multiple testing, and finally evaluates novel candidates by independent replication. Among the samples of European ancestry, we identified a genome-wide significant SNP in FBXL20 (P = 5.6 × 10⁻⁹) in meta-analysis of all available data, and additional SNPs at the INHBC, LRP2, PLEKHA1, SLC3A2 and SLC7A6 genes meeting multiple-testing corrected significance for replication, with overall P-values of 4.5 × 10⁻⁴ to 2.2 × 10⁻⁷. Neither the novel PLEKHA1 nor the FBXL20 association, both further supported by association with eGFR among African Americans and with transcript abundance, would have been implicated by eGFR candidate gene approaches. LRP2, encoding the megalin receptor, was identified through its connection with the previously known eGFR gene DAB2 and extends understanding of the megalin system in kidney function. These findings highlight the integration of existing genome-wide association data with independent biological knowledge to uncover novel candidate eGFR associations, including candidates lacking known connections to kidney-specific pathways. The strategy may also be applicable to other clinical phenotypes, although more testing will be needed to assess its potential for discovery in general.
Abstract:
Executive Summary. The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation carried out to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns have better distributional characteristics than those of realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first and second order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first order stochastically dominate the ones that result from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. This counterintuitive result suggests the presence of an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
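A compact sketch of the second-order stochastic dominance check via the expected-shortfall sequence mentioned in the abstract above (variable names and the sign/scaling convention are assumed): one return distribution dominates another at second order if its curve of average worst-case returns lies above the other's at every quantile.

```python
import numpy as np

def shortfall_curve(returns, quantiles):
    """Mean of the worst returns up to each quantile (the expected-shortfall
    sequence; equivalent, up to scaling, to the absolute Lorenz curve)."""
    r = np.sort(np.asarray(returns))
    return np.array([r[:max(1, int(np.ceil(q * len(r))))].mean() for q in quantiles])

def second_order_dominates(ret_a, ret_b, quantiles=np.linspace(0.01, 1.0, 100)):
    """Pointwise test: does ret_a's curve lie (weakly) above ret_b's everywhere?"""
    return bool(np.all(shortfall_curve(ret_a, quantiles) >= shortfall_curve(ret_b, quantiles)))

# Usage with two simulated daily return series
rng = np.random.default_rng(0)
aggregated = rng.normal(0.01, 0.02, 1000)   # hypothetical aggregated-measure portfolio
single = rng.normal(0.01, 0.04, 1000)       # hypothetical single-measure portfolio
print(second_order_dominates(aggregated, single))
```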
Abstract:
Differential X-ray phase-contrast tomography (DPCT) refers to a class of promising methods for reconstructing the X-ray refractive index distribution of materials that present weak X-ray absorption contrast. The tomographic projection data in DPCT, from which an estimate of the refractive index distribution is reconstructed, correspond to one-dimensional (1D) derivatives of the two-dimensional (2D) Radon transform of the refractive index distribution. There is an important need for the development of iterative image reconstruction methods for DPCT that can yield useful images from few-view projection data, thereby mitigating the long data-acquisition times and large radiation doses associated with use of analytic reconstruction methods. In this work, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction in DPCT. We also investigate the use of one of the models with a modern image reconstruction algorithm for performing few-view image reconstruction of a tissue specimen.
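In symbols, the data model described above can be written as follows (notation assumed): for a refractive index distribution $f(\mathbf{r})$, view angle $\theta$ with unit vector $\hat{\boldsymbol{\theta}}$, and detector coordinate $s$, the DPCT projection data are

```latex
g(\theta, s)
  = \frac{\partial}{\partial s}\,\mathcal{R}f(\theta, s)
  = \frac{\partial}{\partial s}
    \int_{\mathbb{R}^2} f(\mathbf{r})\,
    \delta\!\bigl(s - \mathbf{r}\cdot\hat{\boldsymbol{\theta}}\bigr)\,\mathrm{d}\mathbf{r},
```

i.e. a one-dimensional derivative, with respect to the detector coordinate, of the two-dimensional Radon transform $\mathcal{R}f$; a discrete imaging model then follows from expanding $f$ in a finite set of basis functions and sampling $(\theta, s)$ at the measured views and detector positions.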
Abstract:
This paper presents a new numerical program able to model syntectonic sedimentation. The new model combines a discrete element model of the tectonic deformation of a sedimentary cover and a process-based model of sedimentation in a single framework. The integration of these two methods allows us to simulate both sedimentation and deformation processes in a single, more effective model. The paper briefly describes the antecedents of the program, Simsafadim-Clastic and a discrete element model, in order to introduce the methodology used to merge both programs into the new code. To illustrate the operation and application of the program, the evolution of syntectonic geometries is analysed in an extensional environment and also in association with thrust fault propagation. Using the new code, much more complex and realistic depositional structures can be simulated, together with a more complex analysis of the evolution of the deformation within the sedimentary cover, which is affected by the presence of the new syntectonic sediments.
Abstract:
Currently, a high penetration level of Distributed Generation (DG) is observed in the Danish distribution systems, and even more DG is foreseen in the upcoming years. How to utilize it to maintain the security of the power supply under emergency situations has been of great interest for study. This master project develops a control architecture for the study of distribution systems with large-scale integration of solar power. As part of the EcoGrid EU Smart Grid project, it focuses on the modelling and simulation of a representative Danish LV network located on Bornholm island. Regarding the control architecture, two types of reactive power control techniques are implemented and compared, and in addition a network voltage control based on a tap-changer transformer is tested. After applying a genetic algorithm to five typical Danish domestic loads, the optimized results show lower power losses and voltage deviation using Q(U) control, especially with large consumptions. Finally, a communication and information exchange system is developed with the objective of regulating the reactive power, and thereby the network voltage, remotely and in real time. Validation tests of the simulated parameters are performed as well.
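As a generic illustration of the Q(U) reactive power control mentioned above (the dead band, droop slope and limits below are assumed values, not those of the Bornholm study): the inverter injects or absorbs reactive power as a piecewise-linear function of the locally measured voltage.

```python
def q_of_u(u_pu, q_max=0.4, dead_band=0.02, slope_band=0.04):
    """Piecewise-linear Q(U) droop characteristic (all quantities in per unit).

    Positive return value = reactive power injection (voltage support),
    negative = absorption (counteracting PV-driven voltage rise)."""
    dev = u_pu - 1.0
    if abs(dev) <= dead_band:
        return 0.0
    # Ramp linearly outside the dead band and saturate at +/- q_max
    ramp = min(1.0, (abs(dev) - dead_band) / slope_band)
    q = ramp * q_max
    return -q if dev > 0 else q

# Example: rising voltage caused by PV infeed triggers reactive absorption
for u in (0.97, 1.00, 1.03, 1.06):
    print(u, round(q_of_u(u), 3))
```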
Abstract:
This thesis studies the properties and applications of four families of special functions associated with the Weyl groups and denoted $C$, $S$, $S^s$ and $S^l$. These functions can be viewed as generalizations of the Chebyshev polynomials. They are related to orthogonal polynomials in several variables associated with the simple Lie algebras, for example the Jacobi and Macdonald polynomials. They have several remarkable properties, including continuous and discrete orthogonality. In particular, it is proved in this thesis that the functions $S^s$ and $S^l$ characterized by certain parameters are mutually orthogonal with respect to a discrete measure. Their discrete orthogonality allows us to deduce two types of discrete transforms, analogous to Fourier transforms, for each simple Lie algebra with roots of two different lengths. Like the Chebyshev polynomials, these four families of functions have applications in numerical analysis. In this thesis we obtain some formulas of <