878 results for positivity preserving
Abstract:
The Lucianic text of the Septuagint of the Historical Books, witnessed primarily by the manuscript group L (19, 82, 93, 108, and 127), consists of at least two strata: the recensional elements, which date back to about 300 C.E., and the substratum under these recensional elements, the proto-Lucianic text. Some distinctive readings in L seem to be supported by witnesses that antedate the supposed time of the recension. These witnesses include the biblical quotations of Josephus, Hippolytus, Irenaeus, Tertullian, and Cyprian, and the Old Latin translation of the Septuagint. It has also been posited that some Lucianic readings might go back to Hebrew readings that are not found in the Masoretic text but appear in the Qumran biblical texts. This phenomenon constitutes the proto-Lucianic problem. In chapter 1 the proto-Lucianic problem and its research history are introduced. Josephus' references to 1 Samuel are analyzed in chapter 2. His agreements with L are few and are mostly only apparent or, at best, coincidental. In chapters 3 to 6 the quotations by four early Church Fathers are analyzed. Hippolytus' Septuagint text is extremely hard to establish since his quotations from 1 Samuel have only been preserved in Armenian and Georgian translations. Most of the suggested agreements between Hippolytus and L are only apparent or coincidental. Irenaeus is the most trustworthy textual witness of the four early Church Fathers. His quotations from 1 Samuel agree with L several times against codex Vaticanus (B) and all or most of the other witnesses in preserving the original text. Tertullian and Cyprian agree with L in attesting some Hebraizing approximations that do not seem to be of Hexaplaric origin. More likely, these are early Hebraizing readings of the same tradition as the kaige recension. In chapter 7 it is noted that Origen, although a pre-Lucianic Father, does not qualify as a proto-Lucianic witness.
General observations about the Old Latin witnesses, as well as an analysis of the manuscript La115, are given in chapter 8. In chapter 9 the theory of the proto-Lucianic recension is discussed. In order to demonstrate the existence of the proto-Lucianic recension one should find instances of indisputable agreement between the Qumran biblical manuscripts and L in readings that are secondary in Greek. No such case can be found in the Qumran material in 1 Samuel. In the text-historical conclusions (chapter 10) it is noted that of all the suggested proto-Lucianic agreements in 1 Samuel (about 75, plus 70 in La115) more than half are only apparent or, at best, coincidental. Of the indisputable agreements, however, 26 are agreements in the original reading. In about 20 instances the agreement is in a secondary reading. These agreements are early variants, mostly minor changes of the kind that happen constantly in the course of transmission. Four of the agreements, however, are in a pre-Hexaplaric Hebraizing approximation that found its way independently into the pre-Lucianic witnesses and the Lucianic recension. The study aims to demonstrate the value of the Lucianic text as a textual witness: under the recensional layer(s) there is an ancient text that preserves very old, even original readings which have not been preserved in B and most of the other witnesses. The study also confirms the value of the early Church Fathers as textual witnesses.
Abstract:
Avoiding the loss of coherence of quantum mechanical states is an important prerequisite for quantum information processing. Dynamical decoupling (DD) is one of the most effective experimental methods for maintaining coherence, especially when one can access only the qubit system and not its environment (bath). It involves the application of pulses to the system whose net effect is a reversal of the system-environment interaction. In any real system, however, the environment is not static, and therefore the reversal of the system-environment interaction becomes imperfect if the spacing between refocusing pulses becomes comparable to or longer than the correlation time of the environment. The efficiency of the refocusing therefore improves if the spacing between the pulses is reduced. Here, we quantify the efficiency of different DD sequences in preserving different quantum states. We use C-13 nuclear spins as qubits and H-1 nuclear spins as the environment, which couples to the qubit via magnetic dipole-dipole couplings. Strong dipole-dipole couplings between the proton spins result in a rapidly fluctuating environment with a correlation time of the order of 100 μs. Our experimental results show that delays between the pulses that are short compared with the bath correlation time yield better performance. However, as the pulse spacing becomes shorter than the bath correlation time, an optimum is reached. For even shorter delays, the pulse imperfections dominate over the decoherence losses and cause the quantum state to decay.
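The tradeoff described above can be sketched numerically. The following is a minimal toy model, not the paper's NMR experiment: a single qubit dephases under Ornstein-Uhlenbeck noise with a fixed correlation time, and ideal, equally spaced pi pulses toggle the sign with which the noise accumulates into the qubit phase. All parameters and the pulse sequence are invented for illustration.

```python
import math
import random

# Invented toy parameters (think microseconds and rad/us).
TAU_C = 100.0    # noise correlation time
SIGMA = 0.05     # noise amplitude
T_TOTAL = 400.0  # total evolution time
DT = 1.0         # integration step

def coherence(n_pulses, n_traj=2000, seed=1):
    """Average cos(phase) over noise realizations for a CPMG-like
    sequence of n_pulses equally spaced ideal pi pulses."""
    rng = random.Random(seed)
    steps = int(T_TOTAL / DT)
    spacing = T_TOTAL / n_pulses
    decay = math.exp(-DT / TAU_C)
    kick = SIGMA * math.sqrt(1.0 - decay * decay)
    total = 0.0
    for _ in range(n_traj):
        b = rng.gauss(0.0, SIGMA)   # stationary initial noise value
        phase, sign, n_flips = 0.0, 1.0, 0
        for k in range(steps):
            t = (k + 0.5) * DT
            # pulses sit at spacing/2, 3*spacing/2, ... (CPMG timing)
            while n_flips < n_pulses and t > (n_flips + 0.5) * spacing:
                sign = -sign
                n_flips += 1
            phase += sign * b * DT
            b = b * decay + kick * rng.gauss(0.0, 1.0)  # OU update
        total += math.cos(phase)
    return total / n_traj

# Short spacing (25 << TAU_C) refocuses the slowly fluctuating noise far
# better than long spacing (200 ~ 2 * TAU_C).
c_fast = coherence(n_pulses=16)
c_slow = coherence(n_pulses=2)
```

Because the model has no pulse errors, shorter spacing always wins here; the optimum the abstract describes appears only once finite pulse imperfections are added.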
Abstract:
A numerical integration procedure for rotational motion using a rotation vector parametrization is explored from an engineering perspective using rudimentary vector analysis. The incremental rotation vector, angular velocity and acceleration correspond to different tangent spaces of the rotation manifold at different times and have a non-vectorial character. We rewrite the equation of motion in terms of vectors lying in the same tangent space, facilitating vector space operations consistent with the underlying geometric structure. While any integration algorithm (that works within a vector space setting) may be used, we presently employ a family of explicit Runge-Kutta algorithms to solve this equation. While this work is primarily motivated by a need for highly accurate numerical solutions of dissipative rotational systems of engineering interest, we also compare the numerical performance of the present scheme with some of the invariant-preserving schemes, namely ALGO-C1, STW, LIEMID[EA] and SUBCYC-M. Numerical results show better local accuracy via the present approach vis-a-vis the preserving algorithms. It is also noted that the preserving algorithms do not simultaneously preserve all constants of motion. We incorporate adaptive time-stepping within the present scheme, and this in turn enables still higher accuracy and a `near preservation' of constants of motion over significantly longer intervals. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
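A rudimentary sketch of the kind of computation involved, assuming the standard Bortz rotation-vector kinematic ODE rather than the authors' specific reformulation: a classical RK4 step applied to the rotation-vector rate equation, checked on the case where the rotation vector stays parallel to a constant angular velocity (so growth must be exactly linear).

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def axpy(a, x, y):  # a*x + y, componentwise
    return tuple(a*xi + yi for xi, yi in zip(x, y))

def theta_dot(theta, omega):
    """Bortz equation: rate of the rotation vector theta given the
    angular velocity omega (conventions vary; this is one common form)."""
    th2 = sum(t*t for t in theta)
    th = math.sqrt(th2)
    if th < 1e-8:
        coef = 1.0 / 12.0                     # series limit near theta = 0
    else:
        coef = (1.0 - 0.5*th / math.tan(0.5*th)) / th2
    c1 = cross(theta, omega)
    c2 = cross(theta, c1)
    return tuple(w + 0.5*a + coef*b for w, a, b in zip(omega, c1, c2))

def rk4_step(theta, omega, dt):
    k1 = theta_dot(theta, omega)
    k2 = theta_dot(axpy(0.5*dt, k1, theta), omega)
    k3 = theta_dot(axpy(0.5*dt, k2, theta), omega)
    k4 = theta_dot(axpy(dt, k3, theta), omega)
    return tuple(t + dt/6.0*(a + 2*b + 2*c + d)
                 for t, a, b, c, d in zip(theta, k1, k2, k3, k4))

# Sanity check: with theta parallel to a constant omega all cross terms
# vanish, so theta must grow linearly: (0,0,0.2) -> (0,0,1.2) after t = 1.
theta = (0.0, 0.0, 0.2)
omega = (0.0, 0.0, 1.0)
dt, n = 0.01, 100
for _ in range(n):
    theta = rk4_step(theta, omega, dt)
```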
Abstract:
Hamiltonian systems in stellar and planetary dynamics are typically near integrable. For example, Solar System planets are almost in two-body orbits, and in simulations of the Galaxy, the orbits of stars seem regular. For such systems, sophisticated numerical methods can be developed through integrable approximations. Following this theme, we discuss three distinct problems. We start by considering numerical integration techniques for planetary systems. Perturbation methods (which utilize the integrability of the two-body motion) are preferred over conventional "blind" integration schemes. We introduce perturbation methods formulated with Cartesian variables. In our numerical comparisons, these are superior to their conventional counterparts, but, by definition, lack the energy-preserving properties of symplectic integrators. However, they are exceptionally well suited for relatively short-term integrations in which moderately high positional accuracy is required. The next exercise falls into the category of stability questions in planetary systems. Traditionally, the interest has been in the orbital stability of planets, which has been quantified, e.g., by Liapunov exponents. We offer a complementary aspect by considering the protective effect that massive gas giants, like Jupiter, can offer to Earth-like planets inside the habitable zone of a planetary system. Our method produces a single quantity, called the escape rate, which characterizes the system of giant planets. We obtain some interesting results by computing escape rates for the Solar System. Galaxy modelling is our third and final topic. Because of the sheer number of stars (about 10^11 in the Milky Way), galaxies are often modelled as smooth potentials hosting distributions of stars. Unfortunately, only a handful of suitable potentials are integrable (the harmonic oscillator, the isochrone and the Stäckel potential). This severely limits the possibilities of finding an integrable approximation for an observed galaxy.
A solution to this problem is torus construction: a method for numerically creating a foliation of invariant phase-space tori corresponding to a given target Hamiltonian. Canonically, the invariant tori are constructed by deforming the tori of some existing integrable toy Hamiltonian. Our contribution is to demonstrate how this can be accomplished by using a Stäckel toy Hamiltonian in ellipsoidal coordinates.
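As a minimal illustration of the energy-preservation point contrasted above (an invented setup, not taken from the thesis): the symplectic leapfrog keeps the two-body energy error bounded over many orbits, which is exactly the property the Cartesian perturbation methods trade away for short-term positional accuracy.

```python
import math

# Normalized Kepler problem (mu = 1), circular orbit, pure Python.
def accel(x):
    r3 = (x[0]*x[0] + x[1]*x[1]) ** 1.5
    return (-x[0]/r3, -x[1]/r3)

def energy(x, v):
    return 0.5*(v[0]*v[0] + v[1]*v[1]) - 1.0/math.hypot(x[0], x[1])

x, v = (1.0, 0.0), (0.0, 1.0)
e0 = energy(x, v)
dt, steps = 0.05, 20000        # roughly 160 orbital periods

max_err = 0.0
for _ in range(steps):
    a = accel(x)
    v = (v[0] + 0.5*dt*a[0], v[1] + 0.5*dt*a[1])   # half kick
    x = (x[0] + dt*v[0], x[1] + dt*v[1])           # drift
    a = accel(x)
    v = (v[0] + 0.5*dt*a[0], v[1] + 0.5*dt*a[1])   # half kick
    max_err = max(max_err, abs(energy(x, v) - e0))
# max_err stays small and bounded instead of drifting with time.
```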
Abstract:
The relationship between the Orthodox Churches and the World Council of Churches (WCC) reached a crisis just before the 8th Assembly of the WCC in Harare, Zimbabwe, in 1998. The Special Commission on Orthodox Participation in the WCC (SC), inaugurated in Harare, worked from 1999 to 2002 to solve the crisis and to secure Orthodox participation in the WCC. The purpose of this study is: 1) to clarify the theological motives for the inauguration of the SC and the theological argumentation of the Orthodox criticism; 2) to write a reliable history and analysis of the SC; 3) to outline the theological argumentation which structures the debate; and 4) to investigate the ecclesiological questions that arise from the SC material. The study spans the years 1998 to 2006, from the WCC Harare Assembly to the Porto Alegre Assembly. Hence, the initiation and immediate reception of the Special Commission are included in the study. The sources of this study are all the material produced by and for the SC. The method employed is systematic analysis. The focus of the study is on theological argumentation; the historical context and political motives that played a part in the Orthodox-WCC relations are not discussed in detail. The study shows how the initial, specific and individual Orthodox concerns developed into a profound ecclesiological discussion and also led to concrete changes in WCC practices, the best known of which is the change to decision-making by consensus. The Final Report of the SC contains five main themes, namely, ecclesiology, decision-making, worship/common prayer, membership and representation, and social and ethical issues. The main achievement of the SC was that it secured the Orthodox membership in the WCC. The ecclesiological conclusions made in the Final Report are twofold. On the one hand, it confirms that the very act of belonging to the WCC means commitment to discussing the relationship between a church and churches.
The SC recommended that baptism should be added as a criterion for membership in the WCC, and that the member churches should continue to work towards the mutual recognition of each other's baptism. These elements strengthen the ecclesiological character of the WCC. On the other hand, when the Final Report discusses common prayer, the ecclesiological conclusions are much more cautious, and the ecclesiological neutrality of the WCC is emphasized several times. The SC repeatedly emphasized that the WCC is a fellowship of churches. The concept of koinonia, which has otherwise been important in recent ecclesiological questions, was not much applied by the SC. The comparison of the results of the SC to parallel ecclesiological documents of the WCC (Nature and Mission of the Church, Called to Be the One Church) shows that they all acknowledge the different ecclesiological starting points of the member churches and, following from that, a variety of legitimate views on the relation of the Church to the churches. Despite the change from preserving the koinonia to promises of eschatological koinonia, all the documents affirm that the goal of the ecumenical movement is still full, visible unity.
Abstract:
This paper may be considered a sequel to one of our earlier works pertaining to the development of an upwind algorithm for meshless solvers. While the earlier work dealt with the development of an inviscid solution procedure, the present work focuses on its extension to viscous flows. A robust viscous discretization strategy is chosen based on the positivity of a discrete Laplacian. This work projects the meshless solver as a viable Cartesian grid methodology. The point distribution required for the meshless solver is obtained from a hybrid Cartesian gridding strategy. Particularly considering the importance of a hybrid Cartesian mesh for RANS computations, the difficulties encountered in a conventional least squares based discretization strategy are highlighted. In this context, the importance of discretization strategies that exploit the local structure in the grid is presented, along with a suitable point sorting strategy. Of particular interest are the proposed discretization strategies (both inviscid and viscous) within the structured grid block: a rotated update for the inviscid part and a Green-Gauss procedure based positive update for the viscous part. Both these procedures conveniently avoid the ill-conditioning associated with a conventional least squares procedure in the critical region of the structured grid block. The robustness and accuracy of such a strategy are demonstrated on a number of standard test cases, including a case of a multi-element airfoil. The computational efficiency of the proposed meshless solver is also demonstrated. (C) 2010 Elsevier Ltd. All rights reserved.
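The least squares discretization referred to above can be sketched in a few lines. This is a generic 2-D gradient reconstruction on scattered points, not the paper's scheme; the determinant of the normal equations also shows where the ill-conditioning comes from when the neighbouring points become nearly collinear.

```python
# Least-squares gradient at a point from scattered neighbours:
# solve the 2x2 normal equations for (df/dx, df/dy).
def ls_gradient(center, neighbours, f):
    x0, y0 = center
    f0 = f(x0, y0)
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y) in neighbours:
        dx, dy, df = x - x0, y - y0, f(x, y) - f0
        a11 += dx*dx; a12 += dx*dy; a22 += dy*dy
        b1 += dx*df; b2 += dy*df
    det = a11*a22 - a12*a12   # -> 0 when neighbours are nearly collinear
    return ((a22*b1 - a12*b2) / det, (a11*b2 - a12*b1) / det)

# For a linear field the least-squares gradient is exact.
pts = [(0.1, 0.0), (-0.05, 0.08), (0.02, -0.1), (0.09, 0.07), (-0.08, -0.03)]
gx, gy = ls_gradient((0.0, 0.0), pts, lambda x, y: 2.0*x + 3.0*y + 1.0)
```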
Abstract:
It is shown that the Euclideanized Yukawa theory, with the Dirac fermion belonging to an irreducible representation of the Lorentz group, is not bounded from below. A one-parameter family of supersymmetric actions is presented which continuously interpolates between the N = 2 SSYM and the N = 2 supersymmetric topological theory. In order to obtain a theory which is bounded from below and satisfies Osterwalder-Schrader positivity, the Dirac fermion should belong to a reducible representation of the Lorentz group and the scalar fields have to be reinterpreted as the extra components of a higher-dimensional vector field.
Abstract:
Background: Two clinically relevant high-risk HPV (HR-HPV) types, 16 and 18, are etiologically associated with the development of cervical carcinoma and are also reported to be present in many other carcinomas in extra-genital organ sites. The presence of HPV has been reported in breast carcinoma, which is the second most common cancer in India and is showing a fast rising trend in the urban population. The two early genes E6 and E7 of HPV type 16 have been shown to immortalize breast epithelial cells in vitro, but the role of HPV infection in breast carcinogenesis is highly controversial. The present study has therefore been undertaken to analyze the prevalence of HPV infection in both breast cancer tissues and blood samples from a large number of Indian women with breast cancer from different geographic regions. Methods: The presence of all mucosal HPVs and of the most common high-risk HPV types 16 and 18 DNA was detected by two different PCR methods: (i) conventional PCR assays using consensus primers (MY09/11 or GP5+/GP6+) or HPV16 E6/E7 primers and (ii) highly sensitive real-time PCR. A total of 228 biopsies and the corresponding 142 blood samples, collected prospectively from 252 patients from four different regions of India with significant socio-cultural, ethnic and demographic variations, were tested. Results: None of the biopsies or blood samples of breast cancer patients showed positivity for HPV DNA sequences in conventional PCRs, either with MY09/11 or with GP5+/GP6+/HPV16 E6/E7 primers. Further testing of these samples by real-time PCR also failed to detect HPV DNA sequences. Conclusions: The lack of detection of HPV DNA either in the tumor or in the blood DNA of breast cancer patients by both conventional and real-time PCR does not support a role of genital HPV in the pathogenesis of breast cancer in Indian women.
Abstract:
Nonclassicality in the sense of quantum optics is a prerequisite for entanglement in multimode radiation states. In this work we bring out the possibilities of passing from the former to the latter, via the action of classicality-preserving systems such as beam splitters, in a transparent manner. For single-mode states, a complete description of nonclassicality is available via the classical theory of moments, as a set of necessary and sufficient conditions on the photon number distribution. We show that when the mode is coupled to an ancilla in any coherent state, and the system is then acted upon by a beam splitter, these conditions turn exactly into signatures of negativity under partial transpose (NPT) entanglement of the output state. Since the classical moment problem does not generalize to two or more modes, we turn in these cases to other familiar sufficient but not necessary conditions for nonclassicality, namely the Mandel parameter criterion and its extensions. We generalize the Mandel matrix from one-mode states to the two-mode situation, leading to a natural classification of states with varying levels of nonclassicality. For two-mode states we present a single test that can, if successful, simultaneously show nonclassicality as well as NPT entanglement. We also develop a test for NPT entanglement after beam-splitter action on a nonclassical state, tracing carefully the way in which it goes beyond the Mandel nonclassicality test. The result of three-mode beam-splitter action after coupling to an ancilla in the ground state is treated in the same spirit. The concept of genuine tripartite entanglement, and scalar measures of nonclassicality at the Mandel level for two-mode systems, are discussed. Numerous examples illustrating all these concepts are presented.
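The Mandel parameter criterion mentioned above is easy to state in code: Q = (<n^2> - <n>^2)/<n> - 1, with Q < 0 signalling sub-Poissonian photon statistics, a sufficient condition for nonclassicality. A minimal check on two textbook distributions:

```python
import math

def mandel_q(p):
    """Mandel Q from a photon number distribution p[n]."""
    mean = sum(n * pn for n, pn in enumerate(p))
    mean_sq = sum(n * n * pn for n, pn in enumerate(p))
    return (mean_sq - mean * mean) / mean - 1.0

# Coherent-state (Poissonian) statistics: Q = 0 (classical boundary).
lam = 2.0
poisson = [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(60)]
q_poisson = mandel_q(poisson)

# One-photon Fock state: maximally sub-Poissonian, Q = -1.
q_fock = mandel_q([0.0, 1.0])
```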
Abstract:
1. Habitat selection is a universal aspect of animal ecology that has important fitness consequences and may drive patterns of spatial organisation in ecological communities. 2. Measurements of habitat selection have mostly been carried out on single species and at the landscape level. Quantitative studies examining microhabitat selection at the community level are scarce, especially in insects. 3. In this study, microhabitat selection in a natural assemblage of cricket species was examined for the first time using resource selection functions (RSF), an approach more commonly applied in studies of macrohabitat selection. 4. The availability and differential use of six microhabitats by 13 species of crickets inhabiting a tropical evergreen forest in southern India was examined. The six available microhabitats included leaf litter-covered ground, tree trunks, dead logs, brambles, understorey and canopy foliage. The area offered by the six microhabitats was estimated using standard methods of forest structure measurement. Of the six microhabitats, the understorey and canopy accounted for approximately 70% of the total available area. 5. The use of different microhabitats by the 13 species was investigated using acoustic sampling of crickets to locate calling individuals. Using RSF, it was found that of 13 cricket species examined, 10 showed 100% selection for a specific microhabitat. Of these, two species showed fairly high selection for brambles and dead logs, which were rare microhabitats, highlighting the importance of preserving all components of forest structure.
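The use-versus-availability logic behind resource selection can be sketched with Manly's selection ratio, w = (proportional use)/(proportional availability), one building block of resource selection functions. The microhabitat names follow the study, but every number below is invented for illustration:

```python
# Manly-style selection ratios: w > 1 means use above availability
# (selection), w < 1 means avoidance.  All counts are hypothetical.
def selection_ratios(use_counts, available_fraction):
    total = sum(use_counts.values())
    return {h: (use_counts[h] / total) / available_fraction[h]
            for h in use_counts}

available = {"litter": 0.10, "trunk": 0.05, "log": 0.02,
             "bramble": 0.03, "understorey": 0.35, "canopy": 0.45}
calls = {"litter": 2, "trunk": 1, "log": 12, "bramble": 9,
         "understorey": 4, "canopy": 2}   # invented calling records
w = selection_ratios(calls, available)
# In this made-up example the rare "log" and "bramble" microhabitats
# come out strongly selected, mirroring the pattern the study reports.
```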
Abstract:
Measured health signals incorporate significant details about any malfunction in a gas turbine. The attenuation of noise and removal of outliers from these health signals while preserving important features is an important problem in gas turbine diagnostics. The measured health signals are a time series of sensor measurements such as the low rotor speed, high rotor speed, fuel flow, and exhaust gas temperature in a gas turbine. In this article, a comparative study is done by varying the window length of acausal and unsymmetrical weighted recursive median filters and numerical results for error minimization are obtained. It is found that optimal filters exist, which can be used for engines where data are available slowly (three-point filter) and rapidly (seven-point filter). These smoothing filters are proposed as preprocessors of measurement delta signals before subjecting them to fault detection and isolation algorithms.
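A simplified stand-in for the filters studied above (a plain centered median smoother rather than the weighted recursive variant, applied to a hypothetical trace) shows the key property: a gross outlier is removed while a genuine step in the signal survives.

```python
from statistics import median

def median_filter(signal, window):
    """Acausal (centered) median smoother: the window uses samples on
    both sides of the current one; windows are clipped at the ends."""
    assert window % 2 == 1
    half = window // 2
    return [median(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

# Hypothetical measurement-delta trace: flat baseline, one outlier at
# index 10, then a genuine shift (e.g. a maintenance event) at index 20.
sig = [0.0] * 10 + [8.0] + [0.0] * 9 + [1.0] * 10
smooth3 = median_filter(sig, 3)   # three-point filter (slow data streams)
smooth7 = median_filter(sig, 7)   # seven-point filter (fast data streams)
```

Both window lengths suppress the spike at index 10, while the step at index 20 passes through essentially unchanged.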
Abstract:
Interactive visualization applications benefit from simplification techniques that generate good-quality coarse meshes from high-resolution meshes that represent the domain. These meshes often contain interesting substructures, called embedded structures, and it is desirable to preserve the topology of the embedded structures during simplification, in addition to preserving the topology of the domain. This paper describes a proof that link conditions, proposed earlier, are sufficient to ensure that edge contractions preserve the topology of the embedded structures and the domain. Excluding two specific configurations, the link conditions are also shown to be necessary for topology preservation. Repeated application of edge contraction on an extended complex produces a coarser representation of the domain and the embedded structures. An extension of the quadric error metric is used to schedule edge contractions, resulting in a good-quality coarse mesh that closely approximates the input domain and the embedded structures.
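The quadric error metric mentioned in the last sentence can be sketched directly. This is the generic Garland-Heckbert-style construction, not the paper's extension to embedded structures: each vertex accumulates a 4x4 quadric Q as the sum of p p^T over the planes p = (a, b, c, d) of its incident triangles (with a^2 + b^2 + c^2 = 1), and the cost of placing the vertex at v = (x, y, z, 1) is v^T Q v.

```python
# Pure-Python 4x4 quadric arithmetic, for illustration only.
def plane_quadric(p):
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_q(q1, q2):
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(q, v):
    v4 = (v[0], v[1], v[2], 1.0)
    return sum(v4[i] * q[i][j] * v4[j] for i in range(4) for j in range(4))

# Two incident planes: z = 0 -> (0,0,1,0) and x = 1 -> (1,0,0,-1).
q = add_q(plane_quadric((0.0, 0.0, 1.0, 0.0)),
          plane_quadric((1.0, 0.0, 0.0, -1.0)))

# A point on both planes has zero cost; moving 0.5 off z = 0 costs 0.25,
# the squared distance -- this is the quantity used to rank contractions.
err_on = quadric_error(q, (1.0, 2.0, 0.0))
err_off = quadric_error(q, (1.0, 2.0, 0.5))
```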
Abstract:
The use of delayed coefficient adaptation in the least mean square (LMS) algorithm has enabled the design of pipelined architectures for real-time transversal adaptive filtering. However, the convergence speed of this delayed LMS (DLMS) algorithm, when compared with that of the standard LMS algorithm, is degraded, and it worsens as the adaptation delay increases. Existing pipelined DLMS architectures have large adaptation delays and hence degraded convergence speed. In this paper, we first present a pipelined DLMS architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by using a number of function preserving transformations on the signal flow graph representation of the DLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates, limited only by the delay of a full adder and a 2-to-1 multiplexer. In the second part of this paper, we extend the synthesis methodology described in the first part to synthesize pipelined DLMS architectures whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the DLMS algorithm to meet the required computational throughput. The architecture exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation. (C) 1999 Elsevier Science B.V. All rights reserved.
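The delayed adaptation at the heart of DLMS is a one-line change to LMS: the coefficient update uses the error and regressor from D samples earlier, which is what makes pipelining possible. A toy system-identification sketch in which the channel taps, delay and step size are all invented for the demonstration:

```python
import random

random.seed(7)
h = [1.0, 0.5, -0.3, 0.2]      # "unknown" channel to identify
M, D, mu = len(h), 5, 0.01     # filter length, adaptation delay, step size
w = [0.0] * M                  # adaptive weights
x_hist = [0.0] * (M + D)       # input history, x_hist[k] = x(n - k)
e_hist = [0.0] * (D + 1)       # error history, e_hist[k] = e(n - k)

for n in range(20000):
    x = random.gauss(0.0, 1.0)
    x_hist = [x] + x_hist[:-1]
    d = sum(h[k] * x_hist[k] for k in range(M))   # desired (noise-free)
    y = sum(w[k] * x_hist[k] for k in range(M))   # filter output
    e_hist = [d - y] + e_hist[:-1]
    e_old = e_hist[D]                             # error from D samples ago
    for k in range(M):                            # delayed LMS update
        w[k] += mu * e_old * x_hist[D + k]

misalign = sum((wk - hk) ** 2 for wk, hk in zip(w, h)) ** 0.5
# For small mu the weights still converge to the channel taps; larger
# delays shrink the stable step-size range and slow convergence.
```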
Abstract:
Filtering methods are explored for removing noise from data while preserving sharp edges that may indicate a trend shift in gas turbine measurements. Linear filters are found to have problems removing noise while preserving features in the signal. The nonlinear hybrid median filter is found to accurately reproduce the root signal from noisy data. Simulated faulty data and fault-free gas path measurement data are passed through median filters, and health residuals for the data set are created. The health residual is a scalar norm of the gas path measurement deltas and is used to partition the faulty engine from the healthy engine using fuzzy sets. The fuzzy detection system is developed and tested with noisy data and with filtered data. It is found from tests with simulated fault-free and faulty data that fuzzy trend shift detection based on filtered data is very accurate, with no false alarms and negligible missed alarms.
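The fuzzy partition of the health residual described above might look like the following sketch, where the residual norm and the trapezoidal thresholds are invented placeholders rather than the study's values:

```python
# Map a scalar health residual to "healthy"/"faulty" membership grades
# with a simple trapezoidal transition between invented breakpoints.
def memberships(residual, lo=1.0, hi=2.0):
    if residual <= lo:
        healthy = 1.0
    elif residual >= hi:
        healthy = 0.0
    else:
        healthy = (hi - residual) / (hi - lo)
    return {"healthy": healthy, "faulty": 1.0 - healthy}

def residual_norm(deltas):
    """Scalar norm of the measurement deltas (Euclidean)."""
    return sum(d * d for d in deltas) ** 0.5

clean = memberships(residual_norm([0.2, -0.1, 0.3, 0.1]))    # small deltas
faulty = memberships(residual_norm([1.5, -1.2, 2.0, 0.8]))   # large deltas
```

Filtering the deltas first (as the study does) keeps noise from pushing a healthy residual into the fuzzy transition band, which is what suppresses false alarms.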
Abstract:
The removal of noise and outliers from measurement signals is a major problem in jet engine health monitoring. Typical measurement signals found in most jet engines include low rotor speed, high rotor speed, fuel flow and exhaust gas temperature. Deviations in these measurements from a baseline 'good' engine are often called measurement deltas and are the health signals used for fault detection, isolation, trending and data mining. Linear filters such as the FIR moving average filter and the IIR exponential average filter are used in the industry to remove noise and outliers from the jet engine measurement deltas. However, the use of linear filters can lead to loss of critical features in the signal that can contain information about maintenance and repair events, which could be used by fault isolation algorithms to determine engine condition or by data mining algorithms to learn valuable patterns in the data. Non-linear filters such as the median and weighted median hybrid filters offer the opportunity to remove noise and gross outliers from signals while preserving features. In this study, a comparison of traditional linear filters popular in the jet engine industry is made with the median filter and the subfilter weighted FIR median hybrid (SWFMH) filter. Results using simulated data with implanted faults show that the SWFMH filter results in a noise reduction of over 60 per cent, compared to only 20 per cent for FIR filters and 30 per cent for IIR filters. Preprocessing jet engine health signals using the SWFMH filter would greatly improve the accuracy of diagnostic systems. (C) 2002 Published by Elsevier Science Ltd.
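The study's central comparison can be reproduced in miniature (a plain 5-point median versus a 5-point FIR moving average, not the SWFMH filter; the signal and the implanted outliers are invented): the median removes gross outliers and keeps the step, while the linear smoother smears both.

```python
from statistics import median

def smooth(signal, window, reducer):
    """Apply a centered sliding-window reducer (windows clipped at ends)."""
    half = window // 2
    return [reducer(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

clean = [0.0] * 50 + [1.0] * 50           # a genuine trend shift at i = 50
noisy = clean[:]
for i in (10, 30, 70):                    # implanted gross outliers
    noisy[i] += 10.0

med = smooth(noisy, 5, median)
avg = smooth(noisy, 5, lambda w: sum(w) / len(w))

def rmse(out):
    return (sum((o - c) ** 2 for o, c in zip(out, clean)) / len(clean)) ** 0.5

rmse_med, rmse_avg = rmse(med), rmse(avg)
# The median output matches the clean signal; the moving average spreads
# each outlier over the window and rounds off the step edge.
```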