968 results for Analytic function theory
Abstract:
Most electronic systems can be described, in a very simplified way, as an assemblage of analog and digital components put together to perform a certain function. Nowadays, there is an increasing tendency to reduce the number of analog components and to replace them with operations performed in the digital domain. This tendency has led to the emergence of new electronic systems that are more flexible, cheaper, and more robust. However, no matter how much processing is implemented digitally, there will always be an analog part to deal with, and thus the step of converting digital signals into analog signals, and vice versa, cannot be avoided. This conversion can be more or less complex depending on the characteristics of the signals. Thus, even if it is desirable to replace functions carried out by analog components with digital processes, it is equally important to do so in a way that simplifies the conversion between the digital and analog domains. In the present thesis, we have studied strategies based on increasing the amount of processing in the digital domain in such a way that the implementation of the analog hardware stages can be simplified. To this aim, we have proposed the use of very coarsely quantized signals, i.e., 1-bit signals, for the acquisition and for the generation of particular classes of signals.
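The abstract does not detail the thesis's specific acquisition and generation schemes; as a minimal illustration of 1-bit signal generation, the sketch below implements a first-order sigma-delta modulator, one common (assumed, not confirmed) way to encode a band-limited signal into a 1-bit stream whose low-pass content approximates the input.

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulator: encode x in [-1, 1] as a +/-1 bitstream.

    The previous output is fed back through an integrator, pushing the
    quantization noise to high frequencies (noise shaping).
    """
    bits = np.empty_like(x)
    integrator = 0.0
    for n, sample in enumerate(x):
        integrator += sample - (bits[n - 1] if n > 0 else 0.0)
        bits[n] = 1.0 if integrator >= 0.0 else -1.0
    return bits

# Usage: a slow sine encoded at high oversampling; low-pass filtering
# (here a moving average) of the 1-bit stream recovers the input shape.
t = np.linspace(0, 1, 10_000)
x = 0.8 * np.sin(2 * np.pi * 5 * t)
b = sigma_delta_1bit(x)
recovered = np.convolve(b, np.ones(100) / 100, mode="same")
```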
Abstract:
A 2D Unconstrained Third-Order Shear Deformation Theory (UTSDT) is presented for the evaluation of tangential and normal stresses in moderately thick functionally graded conical and cylindrical shells subjected to mechanical loadings. Several types of graded materials are investigated. The functionally graded material consists of ceramic and metallic constituents, and a four-parameter power-law function is used to describe the grading. The UTSDT allows for a finite transverse shear stress at the top and bottom surfaces of the graded shell. In addition, the initial curvature effect included in the formulation leads to a generalization of the present theory (GUTSDT). The Generalized Differential Quadrature (GDQ) method is used to discretize the derivatives in the governing equations, the external boundary conditions, and the compatibility conditions. Transverse shear and normal stresses are also calculated by integrating the three-dimensional equilibrium equations in the thickness direction. In this way, the six components of the stress tensor at a point of the conical or cylindrical shell or panel can be obtained. The initial curvature effect and the role of the power-law functions are shown for a wide range of functionally graded conical and cylindrical shells under various loading and boundary conditions. Finally, numerical examples from the available literature are worked out.
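The abstract does not spell out the power-law function; a four-parameter form widely used for ceramic-metal graded shells in the GDQ literature (an assumption here, not taken from the abstract) is

\[
V_C(z) \;=\; \left[\,1 - a\left(\frac{1}{2} + \frac{z}{h}\right) + b\left(\frac{1}{2} + \frac{z}{h}\right)^{c}\,\right]^{p},
\qquad V_M \;=\; 1 - V_C,
\]

where $h$ is the shell thickness, $z \in [-h/2, h/2]$ the thickness coordinate, $V_C$ and $V_M$ the ceramic and metal volume fractions, and $a$, $b$, $c$, $p$ the four parameters controlling the through-the-thickness profile.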
Abstract:
Coupled-cluster theory in its single-reference formulation represents one of the most successful approaches in quantum chemistry for the description of atoms and molecules. To extend the applicability of single-reference coupled-cluster theory to systems with degenerate or near-degenerate electronic configurations, multireference coupled-cluster methods have been suggested. One of the most promising formulations of multireference coupled-cluster theory is the state-specific variant suggested by Mukherjee and co-workers (Mk-MRCC). Unlike other multireference coupled-cluster approaches, Mk-MRCC is a size-extensive theory, and results obtained so far indicate that it has the potential to develop into a standard tool for high-accuracy quantum-chemical treatments. This work deals with developments to overcome the limitations in the applicability of the Mk-MRCC method. To this end, an efficient Mk-MRCC algorithm has been implemented in the CFOUR program package to perform energy calculations within the singles and doubles (Mk-MRCCSD) and singles, doubles, and triples (Mk-MRCCSDT) approximations. This implementation exploits the special structure of the Mk-MRCC working equations, which allows existing efficient single-reference coupled-cluster codes to be adapted. The algorithm has the correct computational scaling of d·N^6 for Mk-MRCCSD and d·N^8 for Mk-MRCCSDT, where N denotes the system size and d the number of reference determinants. For the determination of molecular properties such as the equilibrium geometry, the theory of analytic first derivatives of the energy for the Mk-MRCC method has been developed using a Lagrange formalism. The Mk-MRCC gradients within the CCSD and CCSDT approximations have been implemented, and their applicability has been demonstrated for various compounds such as 2,6-pyridyne, the 2,6-pyridyne cation, m-benzyne, ozone, and cyclobutadiene. The development of analytic gradients for Mk-MRCC offers the possibility of routinely locating minima and transition states on the potential energy surface. It can be considered a key step towards the routine investigation of multireference systems and the calculation of their properties. As the full inclusion of triple excitations in Mk-MRCC energy calculations is computationally demanding, a parallel implementation is presented in order to circumvent limitations due to the required execution time. The proposed scheme is based on the adaptation of a highly efficient serial Mk-MRCCSDT code by parallelizing the time-determining steps. A first application to 2,6-pyridyne is presented to demonstrate the efficiency of the current implementation.
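The abstract invokes a Lagrange formalism for the analytic first derivatives; schematically (generic coupled-cluster form, not the Mk-MRCC-specific working equations):

\[
\mathcal{L}(\mathbf{t},\boldsymbol{\lambda},x) \;=\; E(\mathbf{t},x) \;+\; \sum_{\mu}\lambda_{\mu}\,\Omega_{\mu}(\mathbf{t},x),
\qquad
\frac{dE}{dx} \;=\; \frac{\partial\mathcal{L}}{\partial x},
\]

where the $\Omega_{\mu}(\mathbf{t},x) = 0$ are the amplitude equations. Stationarity of $\mathcal{L}$ with respect to the amplitudes $\mathbf{t}$ and the multipliers $\boldsymbol{\lambda}$ is what makes the gradient free of perturbed amplitudes: one set of $\boldsymbol{\lambda}$ equations suffices for all perturbations $x$.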
Abstract:
This dissertation mimics the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure, which we prefer to call semi-centralization. In Chapter 1, we give a brief summary of matching theory. We present the first examples in matching history together with the most general papers and mechanisms. In Chapter 2, we propose our mechanism. In the real-life application, that is, in Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In Chapter 3, we refine our basic mechanism. The modification of the mechanism has a crucial effect on the results. The new mechanism is, as we call it, a middle mechanism. On one subdomain, this mechanism coincides with the original basic mechanism; on the other, it gives the same results as Gale and Shapley's algorithm. In Chapter 4, we apply our basic mechanism to the well-known roommate problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for this framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that our mechanism easily and simply tells us whether a profile lacks stability, by using purified orderings. Finally, we show a method to find all the stable matchings when several exist: simply run the mechanism for each of the top agents in the social preference.
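For reference, the abstract's benchmark is Gale and Shapley's deferred-acceptance algorithm; the sketch below is the standard textbook version for a two-sided market (the thesis's own semi-centralized mechanism is not reproduced here).

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance (proposer-optimal stable matching).

    proposer_prefs / receiver_prefs: dict mapping each agent to a list of
    the other side's agents in decreasing order of preference.
    """
    # rank[r][p] = position of proposer p in receiver r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)             # proposers without a partner
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                              # receiver -> proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:                  # receiver is free: tentatively accept
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:  # receiver prefers p: swap
            free.append(match[r])
            match[r] = p
        else:                               # receiver rejects p
            free.append(p)
    return match

# Usage: a 3x3 market; returns the proposer-optimal stable matching.
men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w1", "w3", "w2"]}
women = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m2", "m3"], "w3": ["m1", "m3", "m2"]}
print(deferred_acceptance(men, women))
```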
Abstract:
This work contains several applications of mode-coupling theory (MCT) and is separated into three parts. In the first part we investigate the liquid-glass transition of hard spheres for dimensions d→∞, analytically and numerically up to d=800, in the framework of MCT. We find that the critical packing fraction ϕc(d) scales as d²·2^(-d), which is larger than the Kauzmann packing fraction ϕK(d) found by a small-cage expansion by Parisi and Zamponi [J. Stat. Mech.: Theory Exp. 2006, P03017 (2006)]. The scaling of the critical packing fraction differs from the relation ϕc(d) ∼ d·2^(-d) found earlier by Kirkpatrick and Wolynes [Phys. Rev. A 35, 3072 (1987)]. This is because the k dependence of the critical collective and self nonergodicity parameters fc(k;d) and fcs(k;d) was assumed to be Gaussian in the previous theories. We show that in MCT this is not the case. Instead, fc(k;d) and fcs(k;d), which become identical in the limit d→∞, converge to a non-Gaussian master function on the scale k ∼ d^(3/2). We find that the numerically determined value of the exponent parameter λ, and therefore also the critical exponents a and b, depend on the dimension d, even at the largest evaluated dimension d=800. In the second part we compare the results of a molecular-dynamics simulation of liquid Lennard-Jones argon far from the glass transition [D. Levesque, L. Verlet, and J. Kurkijärvi, Phys. Rev. A 7, 1690 (1973)] with MCT. We show that the agreement between theory and computer simulation can be improved by taking binary collisions into account [L. Sjögren, Phys. Rev. A 22, 2866 (1980)]. We find that an empirical prefactor in the memory function of the original MCT equations leads to similar results. In the third part we derive the equations of a mode-coupling theory for the spherical components of the stress tensor. Unfortunately, it turns out that they are too complex to be solved numerically.
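For orientation, the nonergodicity parameters referred to above solve the standard long-time MCT fixed-point equation (generic schematic form; the d-dependent kernels used in the thesis are not reproduced here):

\[
\frac{f(k)}{1-f(k)} \;=\; \mathcal{F}_{k}[f],
\]

where $\mathcal{F}_{k}[f]$ is the mode-coupling functional determined by the static structure; the critical packing fraction ϕc is the point where a nontrivial solution f(k) > 0 first bifurcates from zero.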
Abstract:
In this thesis we provide a characterization of probabilistic computation in itself, from a recursion-theoretical perspective, without reducing it to deterministic computation. More specifically, we show that probabilistic computable functions, i.e., those functions which are computed by Probabilistic Turing Machines (PTMs), can be characterized by a natural generalization of Kleene's partial recursive functions which includes, among its initial functions, one that returns identity or successor with probability 1/2. We then prove the equi-expressivity of the obtained algebra and the class of functions computed by PTMs. In the second part of the thesis we investigate the relations between our recursion-theoretical framework and sub-recursive classes, in the spirit of Implicit Computational Complexity. More precisely, endowing predicative recurrence with a random base function is proved to lead to a characterization of polynomial-time computable probabilistic functions.
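As an illustration of the random base function described above (the function names and scaffolding are ours, not the thesis's notation): a call that returns its argument unchanged or incremented, each with probability 1/2, from which probabilistic functions are built by ordinary composition and recursion.

```python
import random

def rand_base(x: int) -> int:
    """The probabilistic initial function: identity or successor, each w.p. 1/2."""
    return x + 1 if random.random() < 0.5 else x

def binomial_walk(x: int, n: int) -> int:
    """Built by recursion over rand_base: adds a Binomial(n, 1/2) offset to x."""
    for _ in range(n):
        x = rand_base(x)
    return x

# Usage: repeated runs realize different outputs; the *distribution* over
# outputs is what the recursion-theoretic characterization captures.
print([binomial_walk(0, 10) for _ in range(5)])
```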
Abstract:
Relativistic effects need to be considered in quantum-chemical calculations on systems including heavy elements, as well as when aiming at high accuracy for molecules containing only lighter elements. In the latter case, treating relativistic effects via perturbation theory is an attractive option. Among the available techniques, Direct Perturbation Theory (DPT) in its lowest order (DPT2) has become a standard tool for the calculation of relativistic corrections to energies and properties. In this work, the DPT treatment is extended to the next order (DPT4). It is demonstrated that the DPT4 correction can be obtained as a second derivative of the energy with respect to the relativistic perturbation parameter. Accordingly, differentiation of a suitable Lagrangian, thereby taking into account all constraints on the wave function, provides analytic expressions for the fourth-order energy corrections. The latter have been implemented at the Hartree-Fock level and within second-order Møller-Plesset perturbation theory in the CFOUR program package, using standard analytic second-derivative techniques. For closed-shell systems, the DPT4 corrections consist of higher-order scalar-relativistic effects as well as spin-orbit corrections, with the latter appearing here for the first time in the DPT series. Relativistic corrections are reported for energies as well as for first-order electrical properties and compared to results from rigorous four-component benchmark calculations in order to judge the accuracy and convergence of the DPT expansion for both the scalar-relativistic and the spin-orbit contributions. Additionally, the importance of relativistic effects for the bromine and iodine quadrupole-coupling tensors is investigated in a joint experimental and theoretical study of the rotational spectra of CH2BrF, CHBrF2, and CH2FI.
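Schematically, writing the relativistic perturbation parameter as λ = c⁻² (a common convention, assumed here since the abstract does not fix the notation), the DPT series and the derivative relations underlying this work read

\[
E(\lambda) \;=\; E_{\text{nrel}} \;+\; \lambda\,E_{\text{DPT2}} \;+\; \lambda^{2}E_{\text{DPT4}} \;+\; \cdots,
\qquad
E_{\text{DPT4}} \;=\; \tfrac{1}{2}\left.\frac{d^{2}E}{d\lambda^{2}}\right|_{\lambda=0},
\]

which is why standard analytic second-derivative machinery can be reused for the fourth-order corrections.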
Abstract:
In this thesis I study differential equations of Feynman integrals. A Feynman integral depends on a dimension parameter D and, for integer dimension, can be represented as a projective integral; this is the so-called Feynman parameter representation. Depending on the dimension, such an integral may diverge. As a function of D, one obtains a meromorphic function on all of C. A divergent integral can thus be replaced by a Laurent series, and its coefficients become the objects of interest. This procedure is known as dimensional regularization. All terms of such a Laurent series of a Feynman integral are periods in the sense of Kontsevich and Zagier. I describe a new method for the computation of differential equations of Feynman integrals. Usually, the so-called integration-by-parts (IBP) identities are used for this purpose. The new method instead uses the theory of Picard-Fuchs differential equations. In the case of projective or quasi-projective varieties, the computation of such a differential equation is based on the so-called Griffiths-Dwork reduction. First, I describe the method for fixed integer dimension. After a suitable shift of the dimension, one directly obtains a period and hence a Picard-Fuchs differential equation. This equation is inhomogeneous, because the integration domain has a boundary and therefore only represents a relative cycle. With the help of dimensional recurrence relations, which go back to Tarasov, the solution in the original dimension can then be determined in a second step. I also describe a method, based on the Griffiths-Dwork reduction, to compute the differential equation directly for arbitrary dimension. This method is generally applicable and avoids shifts of the dimension. Its success depends on the ability to solve large systems of linear equations. I give examples of integrals of graphs with two and three loops. Tarasov gives a basis of integrals determining the two-loop graphs with two external legs; I determine differential equations for the integrals of this basis. As the most important example, I compute the differential equation of the so-called sunrise graph with two loops in the general case of arbitrary masses. For special values of D, this is an inhomogeneous Picard-Fuchs equation of a family of elliptic curves. The sunrise graph is particularly interesting because an analytic solution could be found only by means of this method, and because it is the simplest graph whose master integrals are not given by polylogarithms. I also give an example of a graph with three loops, where the Picard-Fuchs equation of a family of K3 surfaces appears.
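Schematically, with the common convention ε = (4 − D)/2 (an assumption; the thesis's conventions are not given in the abstract), dimensional regularization replaces a divergent L-loop Feynman integral by its Laurent series

\[
I(D) \;=\; \sum_{k \ge -2L} c_{k}\,\varepsilon^{k},
\]

whose coefficients $c_k$ are the periods, in the sense of Kontsevich and Zagier, mentioned above.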
Abstract:
In this work, the theory of analytic second derivatives for the EOMIP-CCSD method is formulated, and its implementation in the quantum-chemistry program CFOUR is described. These derivatives are important for the determination of static polarizabilities and harmonic vibrational frequencies, and this work investigates the accuracy of the EOMIP-CCSD approach in the calculation of these properties for various radical systems. Furthermore, first and second derivatives can be used to compute vibronic coupling parameters, which are needed for the simulation of molecular spectra in combination with the Köppel-Domcke-Cederbaum (KDC) model, demonstrated in this work for the example of the formyloxyl radical (HCO2). The conceptually simple EOMIP-CC ansatz was chosen because the wave function of a radical system is generated from a stable closed-shell state by removal of an electron, so that the problem of symmetry breaking can be avoided. In the course of the implementation, new program modules for solving the required equations for the perturbed EOMIP-CC amplitudes and the perturbed Lagrange multipliers zeta were added to CFOUR. The properties determined using the program are assessed against established methods such as CCSD(T). For the calculation of polarizabilities and harmonic vibrational frequencies, EOMIP-CCSD theory mostly yields good results that deviate only slightly from the CCSD(T) values. Only for radicals whose corresponding anions are not stable (e.g., NH2⁻ and CH3⁻) does the EOMIP-CCSD approach fail to provide a meaningful description, owing to methodological shortcomings. The derivatives of the EOMIP-CCSD energy can also be used to simulate vibronic couplings within the KDC model. For the coupling of different radical states in such a model potential, the derivatives of transition matrix elements play a particularly important role. These so-called coupling constants can be defined and computed especially easily in EOMIP-CC theory. For the photoelectron spectrum of HCO2⁻, two alternatives are examined: vertical determination at the equilibrium geometry of the HCO2⁻ anion, and determination of adiabatic force constants at the equilibrium geometries of the radical. When restricted to harmonic force constants, only the adiabatic model yields a qualitatively sensible description of the spectrum. If both models are extended by cubic and quartic force constants, they approach each other and allow a complete assignment of the measured spectrum within the first 1500 cm⁻¹, with the adiabatic representation reaching nearly quantitative accuracy.
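For context, the textbook relations that make second derivatives the route to these properties (standard definitions, not specific to this work): the static polarizability and the harmonic force constants are

\[
\alpha_{ij} \;=\; -\left.\frac{\partial^{2}E}{\partial F_{i}\,\partial F_{j}}\right|_{\mathbf{F}=0},
\qquad
H_{ab} \;=\; \frac{\partial^{2}E}{\partial x_{a}\,\partial x_{b}},
\]

where $\mathbf{F}$ is an external electric field, $x_a$ are nuclear coordinates, and the harmonic vibrational frequencies follow from diagonalizing the mass-weighted Hessian $H$.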
Abstract:
The European Society of Cardiology heart failure guidelines firmly recommend regular physical activity and structured exercise training (ET), but this recommendation is still poorly implemented in daily clinical practice outside specialized centres and in the real world of heart failure clinics. Yet exercise intolerance can be successfully tackled by applying ET. We need to encourage the mindset that breathlessness may be evidence of signalling between the periphery and central haemodynamic performance, and that regular physical activity may ultimately bring about favourable changes in myocardial function, symptoms, functional capacity, and hospitalization-free life span, and probably survival. In this position paper, we provide practical advice for the application of exercise training in heart failure and for overcoming traditional barriers, based on the current scientific and clinical knowledge supporting the beneficial effect of this intervention.
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function requires an understanding of networks, dynamics, and neurophysiology. In this work, modeling techniques are developed to operate at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, the way in which species evolve over time. Spatial game theory offers a two-dimensional view in which each node analyzes the state of its neighbors and updates based on its surroundings. Our work builds upon this foundation by studying evolutionary game-theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that allows the propagation of information; the strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategies of its neighbors. The techniques of spatial game theory already established by Nowak are reproduced to explain two basic cases and to validate the implementation of the code (a sketch of such an update rule follows this abstract). Two novel modifications, introduced in Chapters 3 and 4, build on this network and may better reflect neural networks. Results: Large parametric studies of the two novel modifications, mutation and rewiring, yielded dynamics in which an intermediate number of nodes fire at any given time. Further, even small mutation rates result in different dynamics, more representative of the hypothesized ideal state. Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
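The abstract does not give the payoff matrix or update rule; the sketch below follows the standard Nowak-May spatial prisoner's dilemma (imitate the best-scoring neighbor) with the mutation modification added as a small probability of switching strategy at random. All parameter values are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, b, mu, steps = 50, 1.8, 0.01, 100   # grid size, defection temptation, mutation rate

# strategy grid: 1 = cooperate (pass information), 0 = defect (block)
S = rng.integers(0, 2, size=(N, N))

def payoffs(S):
    """Each site plays a prisoner's dilemma with its 4 nearest neighbors.

    Cooperator vs cooperator scores 1; a defector scores b against a
    cooperator; all other pairings score 0 (Nowak-May convention).
    """
    P = np.zeros_like(S, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        T = np.roll(S, shift, axis=(0, 1))       # neighbor strategies
        P += np.where(S == 1, T * 1.0, T * b)    # row player's payoff
    return P

for _ in range(steps):
    P = payoffs(S)
    best, best_P = S.copy(), P.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nS = np.roll(S, shift, axis=(0, 1))
        nP = np.roll(P, shift, axis=(0, 1))
        better = nP > best_P                     # imitate the best-scoring neighbor
        best = np.where(better, nS, best)
        best_P = np.where(better, nP, best_P)
    S = best
    flip = rng.random(S.shape) < mu              # mutation: rare random strategy flips
    S = np.where(flip, 1 - S, S)

print("fraction of 'passing' nodes:", S.mean())
```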
Abstract:
Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" problem is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correspondence between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that reflects the way it actually functions better than certain other well-known models do (for example, models of incentive compensation, the Shapiro-Stiglitz model, etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different kinds of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in their income. A second source of randomness involves outside events, beyond the control of the employee, that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes the opposite of those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. The principal hires the agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game; that is, the employee is assumed to be better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning the outcome. Mr. Pechersky schematises it thus. Compensation (the wage) W consists of a base amount plus a portion that varies with the outcome x, so W = a + bx, where b measures the intensity of the incentives provided to the employee: one contract is said to provide stronger incentives than another if it specifies a higher value of b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but not by the employee. The employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee, the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying this type of information asymmetry. In a static framework the result obtained is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the employee's expectations.
This can lead, for example, to labour turnover, or to the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his or her relationships with the employee. If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty, the lower the discount factor), we can conclude that the answer to the question formulated above depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in providing the information needed to compute an efficient allocation of resources. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their part in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem, as Mr. Pechersky sees it, is that there is no reason to believe that the circumstances in Russia are right and that the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships, which can be considered satisfactory from a normative point of view only where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This can be explained as a manifestation of economic agents' risk aversion. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and their behaviour will become explicable in terms of more traditional models.
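Schematically, the dynamic conclusion follows standard repeated-game logic (our notation, not Mr. Pechersky's): with discount factor δ and a per-period value Δ from maintaining a reputation for honesty, the employer honours the promised variable pay bx only if

\[
b\,x \;\le\; \frac{\delta}{1-\delta}\,\Delta,
\]

a condition that is easier to satisfy the larger δ is, which is exactly the stability argument above.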
Abstract:
The concordance probability is used to evaluate the discriminatory power and the predictive accuracy of nonlinear statistical models. We derive an analytic expression for the concordance probability in the Cox proportional hazards model. The proposed estimator is a function of the regression parameters and the covariate distribution only and does not use the observed event and censoring times. For this reason it is asymptotically unbiased, unlike Harrell's c-index based on informative pairs. The asymptotic distribution of the concordance probability estimate is derived using U-statistic theory and the methodology is applied to a predictive model in lung cancer.
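The paper's exact estimator is not reproduced in the abstract; the sketch below shows one natural plug-in form that uses only the regression parameters and covariates, based on the proportional-hazards identity P(T_i > T_j) = 1/(1 + exp(η_i − η_j)) for linear predictors η = β'Z (function name and data are illustrative).

```python
import numpy as np

def concordance_probability(beta, Z):
    """Plug-in concordance estimate for a Cox proportional hazards model.

    Uses only the fitted coefficients and the covariate matrix: under
    proportional hazards, P(T_i > T_j) = 1 / (1 + exp(eta_i - eta_j)),
    so we average that probability over all pairs ordered by risk score.
    """
    eta = Z @ beta                     # linear predictors
    num, den = 0.0, 0
    n = len(eta)
    for i in range(n):
        for j in range(n):
            if eta[i] < eta[j]:        # j is the higher-risk subject of the pair
                num += 1.0 / (1.0 + np.exp(eta[i] - eta[j]))
                den += 1
    return num / den

# Usage with made-up data: beta from a fitted Cox model, Z the covariates.
rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 3))
beta = np.array([0.8, -0.5, 0.3])
print(concordance_probability(beta, Z))
```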
Abstract:
Outcome-dependent, two-phase sampling designs can dramatically reduce the costs of observational studies by judicious selection of the most informative subjects for purposes of detailed covariate measurement. Here we derive asymptotic information bounds and the form of the efficient score and influence functions for the semiparametric regression models studied by Lawless, Kalbfleisch, and Wild (1999) under two-phase sampling designs. We show that the maximum likelihood estimators for both the parametric and nonparametric parts of the model are asymptotically normal and efficient. The efficient influence function for the parametric part agrees with the more general information bound calculations of Robins, Hsieh, and Newey (1995). By verifying the conditions of Murphy and van der Vaart (2000) for a least favorable parametric submodel, we provide asymptotic justification for statistical inference based on profile likelihood.
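For context, the Murphy and van der Vaart (2000) conditions justify the usual chi-squared calibration of profile-likelihood inference (standard statement, sketched here in our notation): with the nuisance parameter η profiled out,

\[
\operatorname{pl}_n(\theta) \;=\; \sup_{\eta}\, l_n(\theta,\eta),
\qquad
2\bigl\{\operatorname{pl}_n(\hat\theta) - \operatorname{pl}_n(\theta_0)\bigr\} \;\xrightarrow{d}\; \chi^{2}_{\dim\theta},
\]

so likelihood-ratio confidence sets for the parametric part remain valid under two-phase sampling.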
Abstract:
In Malani and Neilsen (1992) we proposed alternative estimates of the survival function (for time to disease) using a simple marker that describes time to some intermediate stage in a disease process. In this paper we derive the asymptotic variance of one such proposed estimator using two different methods and compare the terms of order 1/n when there is no censoring. In the absence of censoring, the asymptotic variance obtained using the Greenwood-type approach converges to the exact variance up to terms involving 1/n. However, the asymptotic variance obtained using the theory of counting processes and results from Voelkel and Crowley (1984) on semi-Markov processes has a different term of order 1/n. It is not clear to us at this point why the variance formulae derived using the latter approach give different results.
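For reference, the classical Greenwood formula that the "Greenwood-type approach" generalizes (the standard Kaplan-Meier variance estimate, not the marker-based estimator of the paper):

\[
\widehat{\operatorname{Var}}\bigl\{\hat S(t)\bigr\} \;=\; \hat S(t)^{2} \sum_{t_{i}\le t} \frac{d_{i}}{n_{i}\,(n_{i}-d_{i})},
\]

where $d_i$ and $n_i$ denote the number of events and the number at risk at event time $t_i$.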