Abstract:
In this work, a methodology is proposed to find the dynamic poles of a capacitive pressure transmitter, in order to enhance and extend the online surveillance of this type of sensor based on response time measurement, by applying noise analysis techniques and the dynamic data system procedure. Several measurements taken from a pressurized water reactor have been analyzed. The methodology proposes an autoregressive fit whose order is determined by the sensor's dynamic poles. However, the analyzed signals could not be filtered well enough to remove the plant noise, so the noise was modeled as an additional pair of complex conjugate poles. With this methodology we obtained the numerical value of the sensor's second real pole, despite its low influence on the sensor's dynamic response. This enables more accurate online sensor surveillance, since previous methods considered only one real pole.
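As a rough sketch of the underlying idea (not the paper's actual procedure), the dynamic poles of a fitted autoregressive model are the roots of its characteristic polynomial. A minimal numpy illustration on synthetic first-order sensor noise, where the pole is known in advance:

```python
import numpy as np

def ar_fit(x, order):
    """Least-squares fit of an AR(order) model: x[t] = sum_k a[k]*x[t-k] + e[t]."""
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def ar_poles(a):
    """Poles of the fitted model = roots of z^p - a1*z^(p-1) - ... - ap."""
    return np.roots(np.concatenate(([1.0], -a)))

# synthetic first-order noise signal with a known pole at 0.8
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

a = ar_fit(x, 1)
print(ar_poles(a))  # close to [0.8]
```

In the paper's setting the fit order is raised so that the plant noise contributes an extra complex-conjugate pole pair alongside the sensor's own poles.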
Abstract:
We give necessary and sufficient conditions for the convergence with geometric rate of the common denominators of simultaneous rational interpolants with a bounded number of poles. The conditions are expressed in terms of intrinsic properties of the system of functions used to build the approximants. Exact rates of convergence for these denominators and the simultaneous rational approximants are provided.
Abstract:
In recent years, international cooperation processes have become a key mechanism for companies to internationalise their innovative activities, particularly in the case of small businesses, whose size reduces their ability to develop internationalisation strategies autonomously in the way larger companies do. In Spain, the existence of two parallel programmes with similar structures oriented towards Europe (EUREKA) and Latin America (IBEROEKA) raises the question of whether companies' participation in only one (unipolar) or both (bipolar) of these programmes is the result of a selection process which, in turn, produces different collectives with different efficiency parameters. The aim of this study is to provide a comparative analysis based on the final reports of Spanish companies that have participated in the EUREKA programme. Two groups of companies were compared: one comprising companies whose international experience is limited to Europe (EUREKA), and another formed by companies that have also carried out IBEROEKA projects. The conclusions confirm that the behaviour of the two groups differs substantially and reveal the importance of the geographical perspective in the analysis of international cooperation in technology. This disparate behaviour is a relevant aspect that must be taken into account when designing policies to promote international technological cooperation.
Abstract:
This project surveys the technologies used in object detection and recognition, especially for leaves and chromosomes. The document follows the typical structure of a scientific paper: an abstract, an introduction, sections covering the research area, future work, conclusions, and the references used in its elaboration. The abstract describes what the paper contains: the technologies employed in pattern detection and recognition for leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the useful parts of the image and discarding the useless ones; in short, detection amounts to recognizing the object's borders. Recognition, in contrast, is the process by which a computer or machine decides what kind of object it is handling. We then compile the most widely used technologies for object detection in general, which fall into two main groups: those based on image derivatives and those based on ASIFT points. The derivative-based methods have in common that the image is processed by convolving it with a previously defined kernel. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings because they use the second derivative.
Depending on the level of detail required in the final result, we choose one option or the other: gradient-based methods require fewer operations, so the computer consumes less time and fewer resources, but the quality is worse; Laplacian-based methods need more time and resources because they require more operations, but they yield a much better result. After explaining the derivative-based methods, we review the different algorithms available in each group. The other large group of object recognition technologies is based on ASIFT points, which describe an image by six parameters and compare it with another image using those parameters. The disadvantage of these methods, for our purposes, is that they are only valid for a single specific object: if we want to recognize two different leaves, even of the same species, this method cannot recognize both. It is still worth covering these technologies, since we are discussing recognition methods in general. At the end of the chapter we compare the pros and cons of all the technologies employed, first separately and then all together, in light of our goals. The next chapter, on recognition techniques, is not very long because, although there are general steps for object recognition, every object to be recognized requires its own method, so there is no general procedure to describe. We then move on to leaf detection techniques on computers, using the derivative-based approach explained above. The next step is to reduce the leaf to a set of parameters; depending on the source consulted, there are more or fewer parameters.
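The gradient-versus-Laplacian distinction above can be illustrated with a minimal, self-contained sketch: a toy image with one vertical edge, filtered with the standard Sobel and Laplacian kernels (plain numpy; this is illustrative code, not code from any of the works surveyed):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' sliding-window filtering of a grayscale image with a small
    kernel (cross-correlation, i.e. the kernel is not flipped, which is
    the usual convention for these symmetric/antisymmetric kernels)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Gradient-based: Sobel kernels approximate the first derivative;
# edges show up as large gradient magnitude.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
sobel_y = sobel_x.T

# Laplacian-based: a second-derivative kernel; edges show up as
# zero crossings (sign changes) of the response.
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

# toy image: dark left half, bright right half (one vertical edge)
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = conv2d(img, sobel_x)
gy = conv2d(img, sobel_y)
magnitude = np.hypot(gx, gy)   # peaks along the vertical edge
lap = conv2d(img, laplacian)   # changes sign across the edge
print(magnitude.max(), lap.min(), lap.max())
```

The gradient response is a single broad peak at the edge, while the Laplacian response crosses zero there, which is why Laplacian-based detectors locate edges more precisely at a higher computational cost.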
Some papers recommend dividing the leaf into 3 main features (shape, dent, and vein) and deriving up to 16 secondary features from them through mathematical operations. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area, and perimeter) and extracts 12 secondary features from those. This second alternative is the most widely used, so it is taken as the reference. For leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species of the leaf being recognized; it only requires a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and sweeping angles. The skeletonization process consists of suppressing the interior pixels of the chromosome so that only the silhouette remains. This method is very similar to the derivative-based ones, but it detects the interior of the chromosome rather than its borders. The second technique consists of sweeping angles from the beginning of the chromosome; taking into account that a single chromosome cannot bend by more than a certain angle, it detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles.
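To make the five-feature scheme concrete, here is a hypothetical sketch of deriving secondary features from the five basic ones. The ratios below are common shape descriptors chosen for illustration; they are not necessarily the exact twelve used in the referenced work:

```python
import math

def secondary_features(diameter, length, width, area, perimeter):
    """A few illustrative shape ratios derived from the five basic leaf
    features (diameter, physiological length, physiological width,
    area, perimeter). Illustrative only, not the paper's exact set."""
    return {
        "aspect_ratio": length / width,
        "form_factor": 4.0 * math.pi * area / perimeter ** 2,
        "rectangularity": area / (length * width),
        "perimeter_to_diameter": perimeter / diameter,
        "narrow_factor": diameter / length,
    }

# made-up measurements for a single leaf (arbitrary units)
f = secondary_features(9.0, 8.0, 4.0, 25.0, 22.0)
print(f)
```

Each leaf then becomes a fixed-length feature vector, which is what makes database lookup and classification straightforward.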
The computer is then able to recognize the chromosome. Concerning future work, we generally have two independent sets of techniques that do not combine detection and recognition, so our main goal would be a program that brings both together. For leaves, detection and recognition are already linked, since both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm connecting the two methods, because the existing recognition program requires the user to click on both ends of the leaf and is therefore not automatic. For chromosomes, we should create an algorithm that finds the beginning of the chromosome and then starts sweeping angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this kind of research is needed: with global warming, many species (both animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it suffices to have its 23 chromosomes; to recognize a plant there are several options, the easiest of which is to scan a leaf, since it is the simplest element to digitize and input into a computer.
Abstract:
Although vertebrate cytoplasmic dynein can move to the minus ends of microtubules in vitro, its ability to translocate purified vesicles on microtubules depends on the presence of an accessory complex known as dynactin. We have cloned and characterized a novel gene, NIP100, which encodes the yeast homologue of the vertebrate dynactin complex protein p150glued. Like strains lacking the cytoplasmic dynein heavy chain Dyn1p or the centractin homologue Act5p, nip100Δ strains are viable but undergo a significant number of failed mitoses in which the mitotic spindle does not properly partition into the daughter cell. Analysis of spindle dynamics by time-lapse digital microscopy indicates that the precise role of Nip100p during anaphase is to promote the translocation of the partially elongated mitotic spindle through the bud neck. Consistent with the presence of a true dynactin complex in yeast, Nip100p exists in a stable complex with Act5p as well as Jnm1p, another protein required for proper spindle partitioning during anaphase. Moreover, genetic depletion experiments indicate that the binding of Nip100p to Act5p is dependent on the presence of Jnm1p. Finally, we find that a fusion of Nip100p to the green fluorescent protein localizes to the spindle poles throughout the cell cycle. Taken together, these results suggest that the yeast dynactin complex and cytoplasmic dynein together define a physiological pathway that is responsible for spindle translocation late in anaphase.
Abstract:
We have identified a mutant allele of the DAM1 gene in a screen for mutations that are lethal in combination with the mps1-1 mutation. MPS1 encodes an essential protein kinase that is required for duplication of the spindle pole body and for the spindle assembly checkpoint. Mutations in six different genes were found to be lethal in combination with mps1-1, of which only DAM1 was novel. The remaining genes encode a checkpoint protein, Bub1p, and four chaperone proteins, Sti1p, Hsc82p, Cdc37p, and Ydj1p. DAM1 is an essential gene that encodes a protein recently described as a member of a microtubule binding complex. We report here that cells harboring the dam1-1 mutation fail to maintain spindle integrity during anaphase at the restrictive temperature. Consistent with this phenotype, DAM1 displays genetic interactions with STU1, CIN8, and KAR3, genes encoding proteins involved in spindle function. We have observed that a Dam1p-Myc fusion protein expressed at endogenous levels and localized by immunofluorescence microscopy, appears to be evenly distributed along short mitotic spindles but is found at the spindle poles at later times in mitosis.
Abstract:
Oral squamous cell carcinomas are characterized by complex, often near-triploid karyotypes with structural and numerical variations superimposed on the initial clonal chromosomal alterations. We used immunohistochemistry combined with classical cytogenetic analysis and spectral karyotyping to investigate the chromosomal segregation defects in cultured oral squamous cell carcinoma cells. During division, these cells frequently exhibit lagging chromosomes at both metaphase and anaphase, suggesting defects in the mitotic apparatus or kinetochore. Dicentric anaphase chromatin bridges and structurally altered chromosomes with consistent long arms and variable short arms, as well as the presence of gene amplification, suggested the occurrence of breakage–fusion–bridge cycles. Some anaphase bridges were observed to persist into telophase, resulting in chromosomal exclusion from the reforming nucleus and micronucleus formation. Multipolar spindles were found to various degrees in the oral squamous cell carcinoma lines. In the multipolar spindles, the poles demonstrated different levels of chromosomal capture and alignment, indicating functional differences between the poles. Some spindle poles showed premature splitting of centrosomal material, a precursor to full separation of the microtubule organizing centers. These results indicate that some of the chromosomal instability observed within these cancer cells might be the result of cytoskeletal defects and breakage–fusion–bridge cycles.
Abstract:
Patterns in sequences of amino acid hydrophobic free energies predict secondary structures in proteins. In protein folding, matches in hydrophobic free energy statistical wavelengths appear to contribute to selective aggregation of secondary structures in “hydrophobic zippers.” In a similar setting, the use of Fourier analysis to characterize the dominant statistical wavelengths of peptide ligands’ and receptor proteins’ hydrophobic modes to predict such matches has been limited by the aliasing and end effects of short peptide lengths, as well as the broad-band, mode multiplicity of many of their frequency (power) spectra. In addition, the sequence locations of the matching modes are lost in this transformation. We make new use of three techniques to address these difficulties: (i) eigenfunction construction from the linear decomposition of the lagged covariance matrices of the ligands and receptors as hydrophobic free energy sequences; (ii) maximum entropy, complex poles power spectra, which select the dominant modes of the hydrophobic free energy sequences or their eigenfunctions; and (iii) discrete, best bases, trigonometric wavelet transformations, which confirm the dominant spectral frequencies of the eigenfunctions and locate them as (absolute valued) moduli in the peptide or receptor sequence. The leading eigenfunction of the covariance matrix of a transmembrane receptor sequence locates the same transmembrane segments seen in n-block-averaged hydropathy plots while leaving the remaining hydrophobic modes unsmoothed and available for further analyses as secondary eigenfunctions. In these receptor eigenfunctions, we find a set of statistical wavelength matches between peptide ligands and their G-protein and tyrosine kinase coupled receptors, ranging across examples from 13.10 amino acids in acid fibroblast growth factor to 2.18 residues in corticotropin releasing factor. 
We find that the wavelet-located receptor modes in the extracellular loops are compatible with studies of receptor chimeric exchanges and point mutations. A nonbinding corticotropin-releasing factor receptor mutant is shown to have lost the signatory mode common to the normal receptor and its ligand. Hydrophobic free energy eigenfunctions and their transformations offer new quantitative physical homologies in database searches for peptide-receptor matches.
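The core idea of extracting a dominant statistical wavelength from a hydrophobic free energy sequence can be sketched with a plain FFT power spectrum. This is a simplification of the maximum-entropy and wavelet machinery the authors actually use, and the synthetic test sequence below is illustrative only:

```python
import numpy as np

def dominant_wavelength(seq):
    """Dominant statistical wavelength (in residues) of a hydrophobic
    free energy sequence, taken from the peak of the power spectrum of
    the mean-removed sequence. A plain-FFT sketch; the paper's
    maximum-entropy complex-poles spectra are a sharper estimator."""
    x = np.asarray(seq, float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))   # cycles per residue
    k = np.argmax(power[1:]) + 1      # skip the zero-frequency bin
    return 1.0 / freqs[k]

# synthetic sequence with a built-in period of 3.6 residues
# (the canonical alpha-helix repeat)
n = np.arange(180)
seq = np.cos(2 * np.pi * n / 3.6)
print(dominant_wavelength(seq))  # close to 3.6
```

A ligand-receptor "match" in the paper's sense corresponds to the two sequences sharing such a dominant wavelength.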
Abstract:
Over four hundred years ago, Sir Walter Raleigh asked his mathematical assistant to find formulas for the number of cannonballs in regularly stacked piles. These investigations aroused the curiosity of the astronomer Johannes Kepler and led to a problem that has gone centuries without a solution: why is the familiar cannonball stack the most efficient arrangement possible? Here we discuss the solution that Hales found in 1998. Almost every part of the 282-page proof relies on long computer verifications. Random matrix theory was developed by physicists to describe the spectra of complex nuclei. In particular, the statistical fluctuations of the eigenvalues ("the energy levels") follow certain universal laws based on symmetry types. We describe these and then discuss the remarkable appearance of these laws for zeros of the Riemann zeta function (which is the generating function for the prime numbers, and is the last special function from the last century that is not understood today). Explaining this phenomenon is a central problem. These topics are distinct, so we present them separately with their own introductory remarks.
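The universal spacing laws mentioned above are easy to probe numerically. The following sketch (an illustration, not material from the text) samples small random Hermitian matrices from the Gaussian Unitary Ensemble and examines normalized nearest-neighbor eigenvalue spacings in the bulk of the spectrum, where level repulsion makes very small spacings rare:

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_spacings(n, trials):
    """Normalized nearest-neighbor eigenvalue spacings of random GUE
    matrices, collected from the bulk of the spectrum."""
    spacings = []
    for _ in range(trials):
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        h = (a + a.conj().T) / 2           # Hermitian (GUE) matrix
        ev = np.linalg.eigvalsh(h)
        mid = ev[n // 4: 3 * n // 4]       # bulk of the spectrum
        s = np.diff(mid)
        spacings.extend(s / s.mean())      # normalize to unit mean spacing
    return np.array(spacings)

s = gue_spacings(60, 50)
# Wigner's surmise for GUE predicts strong level repulsion:
# near-zero spacings are rare, unlike for independent (Poisson) levels.
print((s < 0.1).mean())  # a small fraction
```

Montgomery and Odlyzko's observation is that suitably normalized spacings of high zeros of the Riemann zeta function follow these same GUE statistics.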
Abstract:
Type IV pili are thin filaments that extend from the poles of a diverse group of bacteria, enabling them to move at speeds of a few tenths of a micrometer per second. They are required for twitching motility, e.g., in Pseudomonas aeruginosa and Neisseria gonorrhoeae, and for social gliding motility in Myxococcus xanthus. Here we report direct observation of extension and retraction of type IV pili in P. aeruginosa. Cells without flagellar filaments were labeled with an amino-specific Cy3 fluorescent dye and were visualized on a quartz slide by total internal reflection microscopy. When pili were attached to a cell and their distal ends were free, they extended or retracted at rates of about 0.5 μm s−1 (29°C). They also flexed by Brownian motion, exhibiting a persistence length of about 5 μm. Frequently, the distal tip of a filament adsorbed to the substratum and the filament was pulled taut. From the absence of lateral deflections of such filaments, we estimate tensions of at least 10 pN. Occasionally, cell bodies came free and were pulled forward by pilus retraction. Thus, type IV pili are linear actuators that extend, attach at their distal tips, exert substantial force, and retract.
Abstract:
The term coparenting implies a bioparental dyad that often excludes the stepparent's role in sharing parenting across joint-custody households. Focusing solely on this dyad also precludes gaining an understanding of how stepfamily couples manage together the communication and sharing of parental responsibilities with the parent(s) in the shared children's other home. In a departure from this bioparental dyad-focused approach, this study locates the stepfamily couple at the center of an inquiry into managing coparenting across households. This mixed methods design study included in-depth interviews of 32 stepfamily couples whose narratives about coparenting were analyzed using grounded theory methods. Forty-one percent of stepparents engage in direct coparenting communication, sometimes manifested as the coactive approach identified in this study. Stepfamily couples also involve the stepparent indirectly in coparenting communication, through the conferred and consultative approaches. As well, the couples' narratives about coparenting identify them as either united, where they share the experience, or divided, where coparenting is reserved exclusively for the bioparent to manage. The stepfamily couples' narratives about significant coparenting experiences revealed that they experience and make sense of coparenting as 1) struggling, 2) coping, or 3) thriving. No significant relationship was found between marital satisfaction and experiencing coparenting as strugglers, copers or thrivers. Grounded theory analysis of these narratives also reflects the four dichotomous dimensions of 1) regard-disregard, 2) decency-duplicity, 3) facilitation-interference, and 4) accommodation-inflexibility. Significant incidents located along these dimensions contribute to the stepfamily couples' identification as struggling, coping, or thriving in coparenting. 
Experiences on the extreme ends of the dichotomous dimensions generate positive and negative turning points for the coparenting interactions and relationships. As well, experiences on the negative end of the dimensional poles can present challenges for the stepfamily couples. Finally, a synthesis of the findings related to the dichotomous dimensions generates a theory of shared parenting values expectancy.
Abstract:
In this paper, we introduce a formula for the exact number of zeros of every partial sum of the Riemann zeta function inside infinitely many rectangles of the critical strips where they are situated.
Abstract:
In this paper we provide the proof of a practical point-wise characterization of the set R_P defined as the closure of the set of real projections of the zeros of an exponential polynomial P(z) = Σ_{j=1}^{n} c_j e^{w_j z} with real frequencies w_j linearly independent over the rationals. As a consequence, we give a complete description of the set R_P and prove its invariance with respect to the moduli of the c_j's, which allows us to determine exactly the gaps of R_P and the endpoints of the critical interval of P(z) by solving inequalities in positive real numbers. Finally, we analyse the converse of this invariance result.
Abstract:
The Paleocene-Eocene Thermal Maximum (PETM) has been attributed to a rapid rise in greenhouse gas levels. If so, warming should have occurred at all latitudes, although amplified toward the poles. Existing records reveal an increase in high-latitude sea surface temperatures (SSTs) (8° to 10°C) and in bottom water temperatures (4° to 5°C). To date, however, the character of the tropical SST response during this event remains unconstrained. Here we address this deficiency by using paired oxygen isotope and minor element (magnesium/calcium) ratios of planktonic foraminifera from a tropical Pacific core to estimate changes in SST. Using mixed-layer foraminifera, we found that the combined proxies imply a 4° to 5°C rise in Pacific SST during the PETM. These results would necessitate a rise in atmospheric pCO2 to levels three to four times as high as those estimated for the late Paleocene.
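The Mg/Ca side of the paired-proxy logic can be sketched with the common exponential calibration Mg/Ca = B · exp(a · T), under which a fractional rise in the ratio maps directly to a temperature change. The sensitivity a ≈ 0.09 per °C below is a typical literature value, not one taken from this study:

```python
import math

def delta_sst(mgca_before, mgca_after, a=0.09):
    """Temperature change (degrees C) implied by a change in planktonic
    foraminiferal Mg/Ca, assuming the exponential calibration
    Mg/Ca = B * exp(a * T); the preexponential B cancels in the ratio.
    a ~ 0.09 per degree C is a typical literature value, used here
    only for illustration."""
    return math.log(mgca_after / mgca_before) / a

# a ~50% rise in Mg/Ca implies roughly 4.5 degrees C of warming,
# comparable to the 4-5 degree C tropical PETM signal described above
print(delta_sst(3.0, 4.5))
```

Pairing this with oxygen isotopes lets the temperature and seawater-composition contributions to the δ18O signal be separated, which is the point of the combined proxies.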