906 results for IT order list
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
Ghost imaging with classical incoherent light by third-order correlation is investigated. We discuss the similarities and the differences between ghost imaging by third-order correlation and by second-order correlation, and analyze the effect from each correlation part of the third-order correlation function on the imaging process. It is shown that the third-order correlated imaging includes richer correlated imaging effects than the second-order correlated one, while the imaging information originates mainly from the correlation of the intensity fluctuations between the test detector and each reference detector, as does ghost imaging by second-order correlation.
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at Reynolds number equal to one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. 
As demonstrated via a variety of numerical experiments in two and three dimensions, further, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
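The ADI idea underlying item 1) can be illustrated on a far simpler model problem. The sketch below (a generic Peaceman-Rachford step for the 2D heat equation with homogeneous Dirichlet boundaries, not the compressible Navier-Stokes solver of the thesis) shows the key point: each half step is implicit in only one spatial direction, so the cost per step reduces to a set of tridiagonal solves, i.e., linear in the number of unknowns.

```python
# Generic ADI (Peaceman-Rachford) illustration on u_t = u_xx + u_yy,
# zero Dirichlet boundaries; NOT the thesis's Navier-Stokes solver.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d) in O(n) operations."""
    n = len(d)
    b_, d_ = b.astype(float).copy(), d.astype(float).copy()
    for i in range(1, n):
        w = a[i] / b_[i - 1]
        b_[i] -= w * c[i - 1]
        d_[i] -= w * d_[i - 1]
    x = np.empty(n)
    x[-1] = d_[-1] / b_[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d_[i] - c[i] * x[i + 1]) / b_[i]
    return x

def d2(u, axis, h):
    """Second difference along one axis, zero values outside the grid."""
    up, um = np.zeros_like(u), np.zeros_like(u)
    if axis == 0:
        up[:-1, :] = u[1:, :]
        um[1:, :] = u[:-1, :]
    else:
        up[:, :-1] = u[:, 1:]
        um[:, 1:] = u[:, :-1]
    return (up - 2.0 * u + um) / (h * h)

def adi_step(u, dt, h):
    """One Peaceman-Rachford step: x-implicit/y-explicit half step,
    then y-implicit/x-explicit half step."""
    n = u.shape[0]
    r = dt / (2.0 * h * h)
    a = np.full(n, -r)
    b = np.full(n, 1.0 + 2.0 * r)
    c = np.full(n, -r)
    star = np.empty_like(u)
    rhs = u + (dt / 2.0) * d2(u, 1, h)      # y treated explicitly
    for j in range(n):                       # tridiagonal solves in x
        star[:, j] = thomas(a, b, c, rhs[:, j])
    new = np.empty_like(u)
    rhs = star + (dt / 2.0) * d2(star, 0, h) # x treated explicitly
    for i in range(n):                       # tridiagonal solves in y
        new[i, :] = thomas(a, b, c, rhs[i, :])
    return new
```

For the eigenmode sin(pi x) sin(pi y) this scheme reproduces the analytic decay factor exp(-2 pi^2 dt) to high accuracy, which is a quick way to check an ADI implementation.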
Abstract:
The effect of group delay ripple of chirped fiber gratings on composite second-order (CSO) performance in optical fiber CATV system is investigated. We analyze the system CSO performances for different ripple amplitudes, periods and residual dispersion amounts in detail. It is found that the large ripple amplitude and small ripple period will deteriorate the system CSO performance seriously. Additionally, the residual dispersion amount has considerable effect on CSO performance in the case of small ripple amplitude and large ripple period. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
A theory of the order-disorder transformation is developed in complete generality. The general theory is used to calculate long range order parameters, short range order parameters, energy, and phase diagrams for a face centered cubic binary alloy. The theoretical results are compared to the experimental determination of the copper-gold system. Values for the two adjustable parameters are obtained.
An explanation for the behavior of magnetic alloys is developed. Curie temperatures and magnetic moments of the first transition series elements and their alloys in both the ordered and disordered states are predicted. Experimental agreement is excellent in most cases. It is predicted that the state of order can affect the magnetic properties of an alloy to a considerable extent in alloys such as Ni3Mn. The values of the adjustable parameter used to fix the level of the Curie temperature, and of the adjustable parameter that expresses the effect of ordering on the Curie temperature, are obtained.
Abstract:
The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noises. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.
This method is applied to 4 subclasses: (1) m = 1, h_1 = const. (forcing function excitation); (2) m = 1, h_1 = f (parametric excitation); (3) m = 2, h_1 = const., h_2 = f, n_1 and n_2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).
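As a concrete instance of subclass (1), take the linear case f(x) = a x with a single constant excitation, which is the classical Ornstein-Uhlenbeck process. The sketch below (a generic illustration of the textbook result, not the thesis's derivation) states the stationary autocorrelation and spectral density and checks them against a direct numerical transform:

```python
# Subclass (1) with f(x) = a*x: the randomly forced linear first-order
# system dx/dt + a x = n(t), with white noise <n(t)n(t')> = 2 D delta(t-t').
import numpy as np

def autocorrelation(tau, a, D):
    """Stationary autocorrelation R(tau) = (D/a) exp(-a |tau|)."""
    return (D / a) * np.exp(-a * np.abs(tau))

def spectral_density(omega, a, D):
    """Wiener-Khinchin transform of R: S(omega) = 2 D / (a^2 + omega^2)."""
    return 2.0 * D / (a ** 2 + omega ** 2)

# Numerical check: a trapezoidal Fourier transform of R reproduces S.
a, D = 1.3, 0.7
tau = np.linspace(-40.0, 40.0, 200001)
R = autocorrelation(tau, a, D)
dtau = tau[1] - tau[0]
for w in (0.0, 0.5, 2.0):
    f = R * np.cos(w * tau)
    S_num = np.sum(f[1:] + f[:-1]) * dtau / 2.0
    assert abs(S_num - spectral_density(w, a, D)) < 1e-3
```

The piecewise linear and parametric subclasses treated in the thesis generalize exactly this Lorentzian baseline.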
Abstract:
Albacore and Atlantic Bluefin tuna are two pelagic fishes. Atlantic Bluefin tuna is included in the IUCN Red List of threatened species and albacore is considered near threatened, so conservation plans are needed. However, no genomic resources are available for either of them. In this study, to better understand their transcriptome, we functionally annotated orthologous genes. In all, 159 SNPs distributed in 120 contigs of the muscle transcriptome were analyzed. Genes were predicted for 98 contigs (81.2%) using the bioinformatics tool BLAST. In addition, another bioinformatics tool, BLAST2GO, was used to assign GO terms to the genes: 41 sequences were assigned a biological process and 39 a molecular function. The most frequent biological process was metabolism, and notably no cellular component was assigned to any of the sequences. The most abundant molecular function was binding, and very few catalytic activities were assigned. Of the initial 159 SNPs, 40 aligned with a database sequence after BLAST2GO was run and were polymorphic in Atlantic Bluefin tuna but monomorphic in albacore. Of these 40 SNPs, 24 were located in an open reading frame (four non-synonymous and 20 synonymous) and 16 were not located in a known open reading frame. This study provides information for better understanding the ecology and evolution of these species, which is important in order to establish a proper conservation plan and appropriate management.
Abstract:
This work presents a methodology for selecting an ERP aimed at SMEs, focused on those operating in the industrial sector. The first part explains the context of Spanish SMEs and the justification for the need for this project. The methodology section then details the steps and factors taken into account to develop the project, the sources consulted, and how to apply the method. Finally, there is a study of the current ERP offering and a selection of the most important systems currently in the Spanish market. Annex I: detailed information on all the ERPs chosen for the project; the questionnaires were drawn up from this information (obtained from official pages, forums, official partners…). Annex II: a list of questionnaires divided into blocks according to the different departments of the company; the head of each area fills in the corresponding block, marking for each item whether or not it is necessary for the company; the final score determines which ERPs have the greatest affinity with the company. Annex III: the questionnaire in Excel format; when completed, the weighted scores and the compatibility with each ERP are obtained automatically.
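The Annex II/III scoring scheme lends itself to a small illustration. The sketch below (all names and numbers are invented, not taken from the project's questionnaires) ranks ERPs by the fraction of the company's marked-as-necessary items each one covers:

```python
# Hypothetical sketch of the questionnaire scoring described in the
# annexes: departments mark which items they need, and each ERP is
# scored by the share of needed items it supports.
def erp_affinity(needed_items, coverage):
    """Rank ERPs by affinity with the company's needs.

    needed_items -- set of item ids marked as necessary
    coverage     -- dict mapping ERP name -> set of item ids it supports
    """
    scores = {}
    for erp, items in coverage.items():
        scores[erp] = (len(needed_items & items) / len(needed_items)
                       if needed_items else 0.0)
    # highest affinity first
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```

In the Excel version described in Annex III the items also carry department weights; adding a per-item weight to this sketch is a one-line change.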
Abstract:
Proper encoding of transmitted information can improve the performance of a communication system. To recover the information at the receiver it is necessary to decode the received signal. For many codes the complexity and slowness of the decoder are so severe that the code is not feasible for practical use. This thesis considers the decoding problem for one such class of codes, the comma-free codes related to the first-order Reed-Muller codes.
A factorization of the code matrix is found which leads to a simple, fast, minimum-memory decoder. The decoder is modular, and only n modules are needed to decode a code of length 2^n. The relevant factorization is extended to any code defined by a sequence of Kronecker products.
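The flavor of such a Kronecker factorization can be illustrated with the first-order Reed-Muller codes themselves (a generic software sketch in the style of the "Green machine" decoder, not the modular design described here): the 2^n x 2^n Hadamard matrix factors into n sparse butterfly stages, one per Kronecker factor, so maximum-likelihood decoding reduces to one fast transform.

```python
# Generic fast-Hadamard-transform decoder for first-order Reed-Muller
# codewords; an illustration of Kronecker-product structure, not the
# thesis's decoder.
import numpy as np

def fast_hadamard(v):
    """Apply the 2^n-point Sylvester-Hadamard transform in n butterfly
    stages, one stage per Kronecker factor H_1 ⊗ ... ⊗ H_1."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

def rm1_decode(received_bits):
    """Maximum-likelihood decoding of a (possibly noisy) first-order
    Reed-Muller codeword: correlate against all codewords at once."""
    x = 1.0 - 2.0 * np.asarray(received_bits)   # map bits 0/1 -> +1/-1
    corr = fast_hadamard(x)                     # all correlations at once
    k = int(np.argmax(np.abs(corr)))            # best-matching row
    return k, bool(corr[k] < 0)                 # (row index, complemented?)
```

Because the transform has n stages of 2^n additions each, the decoder runs in O(n 2^n) operations instead of the O(4^n) cost of naive correlation, mirroring the "n modules for length 2^n" structure described above.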
The problem of monitoring the correct synchronization position is also considered. A general answer seems to depend upon more detailed knowledge of the structure of comma-free codes. However, a technique is presented which gives useful results in many specific cases.
Abstract:
Spatiotemporal instabilities in nonlinear Kerr media with arbitrary higher-order dispersions are studied by use of standard linear-stability analysis. A generic expression for the instability growth rate that unifies and expands on previous results for temporal, spatial, and spatiotemporal instabilities is obtained. It is shown that all odd-order dispersions contribute nothing to instability, whereas all even-order dispersions not only affect the conventional instability regions but may also lead to the appearance of new instability regions. The role of fourth-order dispersion in spatiotemporal instabilities is studied as a representative example to demonstrate the generic results. Numerical simulations confirm the obtained analytic results. (C) 2002 Optical Society of America.
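The claim that only even dispersion orders drive instability can be made concrete with a standard plane-wave modulation-instability sketch (notation and sign conventions assumed here, not taken from the paper). Expanding the dispersion in powers of the perturbation frequency Ω with coefficients β_m, the perturbation wavenumber splits into odd and even parts:

K±(Ω) = Σ_{m odd} (β_m / m!) Ω^m ± sqrt( B(Ω) [ B(Ω) + 2γP₀ ] ),  with  B(Ω) = Σ_{m even, m≥2} (β_m / m!) Ω^m,

where γ is the nonlinear coefficient and P₀ the pump power. The odd-order sum is always real, so it merely shifts the perturbation wavenumber; exponential gain requires Im K ≠ 0, i.e. B(Ω)[B(Ω) + 2γP₀] < 0, a condition governed entirely by the even orders, in line with the result stated above.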
Abstract:
William Shakespeare's Hamlet (1601) has, since the 1623 Folio, been surrounded by an enormous and varied volume of readings, ranging from critical and theoretical texts to the most diverse theatrical and film adaptations. Since the end of the 19th century, cinema has been adapting Shakespeare's plays, providing new points of view and suggestions for staging this work by bringing it to the screen countless times. From a long list of film adaptations of Hamlet, Franco Zeffirelli's mainstream Hamlet (1990) and Hamlet 2000 (2000), Michael Almereyda's independent film, make up the corpus chosen for analysis in this dissertation. In dialogue with notions from critics and theorists who have developed studies on the concept of adaptation, such as André Bazin, Robert Stam, and Linda Hutcheon, I suggest a de-hierarchization between the Shakespearean play and the films, and hence between literature/theater and cinema. The final objective of this work is to propose a reflection on these films as potential critical material for the study of the play, useful in discussing some of its most important themes and/or questions.
Abstract:
This work aims to present the role of design in the Brazilian perfume market. It starts from the hypothesis that design is the fundamental element behind the good performance of this segment, insofar as it makes differentiation among the various packages possible, creating consumption segmentation across the most diverse social strata. First, the universe of perfume is presented, addressing its technical and cultural aspects. A list of raw materials used in the perfume industry is provided; its purpose is to give the professional designer and the design researcher a visual reference for the elements that make up a perfume. Next, the main aspects of the history of perfume in the national perfumery market are highlighted, as well as the shift in consumption paradigms along this trajectory. This is followed by a presentation of the peculiarities of a packaging design project for this segment, highlighting the profile of the designer and of this market, along with a list of technical terms. Finally, a cataloguing model is presented and applied to a group of national and international perfumes. The study closes with an analysis of the catalogued packages, in order to show that there are different design solutions for communicating the concepts of a perfume.
Abstract:
Blowflies are insects of forensic interest as they may indicate characteristics of the environment where a body had been lying prior to discovery. In order to estimate changes in community related to landscape, and to assess whether blowfly species can be used as indicators of the landscape where a corpse has been decaying, we studied the blowfly community and how it is affected by landscape in a 7,000 km² region during a whole year. Using baited traps deployed monthly, we collected 28,507 individuals of 10 calliphorid species, 7 of them well represented and distributed in the study area. Multiple analysis of variance found changes in abundance between seasons in the 7 analyzed species, and changes related to land use in 4 of them (Calliphora vomitoria, Lucilia ampullacea, L. caesar and L. illustris). Generalised linear model analyses of the abundance of these species against landscape descriptors at different scales found a clear significant relationship only between summer abundance of C. vomitoria and distance to urban areas and degree of urbanisation. This relationship explained more deviance when the landscape composition was considered at larger geographical scales (up to 2,500 m around the sampling site). For the other species, no clear relationship between land use and abundance was found, and therefore the observed changes in their abundance patterns could be the result of other variables, probably small changes in temperature. Our results suggest that blowfly community composition cannot be used to infer in what kind of landscape a corpse has decayed, at least in highly fragmented habitats, the only exception being the summer abundance of C. vomitoria.
Abstract:
Only the first-order Doppler frequency shift is considered in current laser dual-frequency interferometers; however, the second-order Doppler frequency shift should also be considered when the measurement corner cube (MCC) moves at high or variable velocity, because it can cause considerable error. The influence of the second-order Doppler frequency shift on interferometer error is studied in this paper, and a model of the second-order Doppler error is put forward. Moreover, the model has been simulated for both high-velocity and variable-velocity motion. The simulated results show that the second-order Doppler error is proportional to the velocity of the MCC when it moves with uniform motion and the measured displacement is fixed. When the MCC moves with variable motion, the second-order Doppler error depends not only on velocity but also on acceleration. When the muzzle velocity is zero, the second-order Doppler error caused by an acceleration of 0.6g can be up to 2.5 nm in 0.4 s, which is not negligible in nanometric measurement. Moreover, when the muzzle velocity is nonzero, accelerated motion may result in a greater error and decelerated motion in a smaller error.
Abstract:
Health is a very important aspect of anyone's life, so when any contingency occurs that diminishes the health of an individual or group of people, the different alternatives for fighting the disease must be assessed strictly and in detail, because the patients' quality of life will vary depending on the alternative chosen. Health-related quality of life (HRQoL) is understood as the value assigned to the duration of life, modified by social opportunity, perception, functional status, and the impairment caused by a disease, accident, treatment, or policy (Sacristán et al., 1995). To determine the numerical value assigned to HRQoL for an intervention, we must draw on the economic theory applied to the health evaluation of new interventions. Among the methods of health economic evaluation, the cost-utility method uses as its utility measure the quality-adjusted life year (QALY), which takes into account, on the one hand, the quality of life after a medical intervention and, on the other, the years the patient is expected to live after the intervention. To determine quality of life, techniques such as the Standard Gamble, the Time Trade-Off, and the Category Rating Scale are used. These techniques yield a numerical value between 0 and 1, where 0 is the worst state and 1 is perfect health. When interviewing a patient about utility in terms of health, there may be risk or uncertainty in the question posed. In that case, the Standard Gamble is applied to determine the numerical value of the patient's utility or quality of life under a given treatment. To obtain this value, the patient is presented with two scenarios: first, a health state with some probability of dying and of surviving, and second, a state of certainty.
Utility is determined by varying the probability of dying until reaching the probability at which the individual is indifferent between the risky state and the state of certainty. Similarly, there is the Time Trade-Off, which is easier to apply than the Standard Gamble since it plots, on a pair of axes, the value of health against the time to be spent in that situation under a health treatment; the value corresponding to quality of life is reached by varying the time until the individual is indifferent between the two alternatives. Finally, if what is expected from the patient is a ranked list of preferred health states under a treatment, the Category Rating Scale is used, consisting of a 10-centimeter horizontal line scored from 0 to 100. The interviewee places the list of health states in order of preference on the scale, which is then normalized to the interval between 0 and 1. Quality-adjusted life years are obtained by multiplying the quality-of-life value by the estimated number of years the patient will live. However, none of the methodologies mentioned considers the age factor, making the inclusion of this variable necessary. Moreover, patients may answer subjectively, in which case the opinion of an expert is required to determine the patient's level of disability. This introduces the concept of disability-adjusted life years (DALYs), such that the QALY utility parameter is the complement of the DALY disability parameter, Q^i = 1 - D^i, although the latter incorporates age-weighting parameters not contemplated in QALYs. Furthermore, under the assumption Q = 1 - D, we can determine the individual's quality of life before treatment. Once the QALYs gained have been obtained, we proceed to their monetary valuation.
To do so, we start from the assumption that the health intervention allows the individual to resume the work they had been doing. We therefore value the probable wages over a period equal to the QALYs gained, bearing in mind the limitation involved in applying this approach. Finally, we analyze the benefits derived from the treatment (the probable wage bill) using the GRF-95 (female population) and GRM-95 (male population) tables.
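The QALY bookkeeping described above is simple enough to sketch directly. This is a minimal illustration: the utilities, disability weight, life years, and wage figure are invented, and the wage valuation deliberately ignores the age-weighting and subjectivity issues raised in the text.

```python
# Minimal QALY/DALY arithmetic sketch; all numbers are illustrative.
def utility_from_disability(d):
    """Complement relation between QALY utility and DALY disability: Q = 1 - D."""
    return 1.0 - d

def qalys_gained(utility_after, utility_before, life_years):
    """QALYs gained = (utility after - utility before) * expected life years."""
    return (utility_after - utility_before) * life_years

def wage_valuation(qalys, annual_wage):
    """Crude monetary valuation: probable wages over a span equal to the QALYs gained."""
    return qalys * annual_wage

# Example: disability weight 0.4 before treatment, utility 0.9 after,
# 10 expected life years, probable annual wage of 20,000.
q_before = utility_from_disability(0.4)     # 0.6
gain = qalys_gained(0.9, q_before, 10.0)    # about 3.0 QALYs
value = wage_valuation(gain, 20000.0)       # about 60,000
```

A fuller model would discount future years and apply the mortality tables (GRF-95/GRM-95) mentioned above instead of a flat wage.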