956 results for Models for count data


Relevance: 80.00%

Abstract:

The main objective of this thesis is to examine and model a configuration system and its related processes: when and where configuration information is created in the product development process, and how it is utilized in the order-delivery process. From the information point of view, these two processes are the essential parts of the whole configuration system. The empirical part of the work was carried out as constructive research inside a company that follows a mass customization approach. Data models and documentation were created for the different development stages of the configuration system. A base data model already existed for the new structures and the relations between them; this model was used as the basis for the later data modeling work. The data models include the different data structures, their key objects and attributes, and the relations between them. Representing the configuration rules of the to-be configuration system was defined as one of the key focus points. Further, the thesis examines how customer needs and requirements information can be integrated into the product development process. A requirements hierarchy and classification system is presented, and it is shown how individual requirement specifications can be connected to the physical design structure via features by developing the existing base data model further.
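Since no schema appears in the abstract, the following is a purely hypothetical sketch of the kind of objects such a base data model might contain: features, configuration rules, and requirement specifications linked to the physical structure via features. All class and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    options: list[str]

@dataclass
class Requirement:
    text: str
    satisfied_by: list[Feature] = field(default_factory=list)  # link via features

@dataclass
class ConfigurationRule:
    # "if feature f1 takes value v1, feature f2 must take value v2"
    if_feature: str
    if_value: str
    then_feature: str
    then_value: str

engine = Feature("engine", ["1.6L", "2.0L"])
req = Requirement("towing capacity >= 1500 kg", satisfied_by=[engine])
rule = ConfigurationRule("engine", "1.6L", "towbar", "none")
print(req, rule, sep="\n")
```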

Relevance: 80.00%

Abstract:

Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes the framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. The results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, in excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
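To illustrate why LM-based inference survives weak instruments, here is a minimal simulation sketch (not the paper's setup): in a linear IV model with a deliberately weak instrument, the score/LM test of H0: beta = b0 remains correctly sized, so inverting it over a grid of b0 values yields a valid, typically wide, confidence set.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pi, beta = 500, 0.05, 1.0            # pi small -> weak instrument
z = rng.normal(size=n)                  # instrument
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)        # endogeneity: corr(u, v) > 0
x = pi * z + v                          # first stage, nearly irrelevant z
y = beta * x + u

def lm_stat(b0):
    """Score/LM statistic for H0: beta = b0 (chi-square, 1 df)."""
    r = y - b0 * x                      # residual under the null
    return (z @ r) ** 2 / ((z ** 2) @ (r ** 2))

# invert the test: keep every b0 the 5%-level LM test does not reject
grid = np.linspace(-5, 5, 2001)
keep = np.array([lm_stat(b) < 3.84 for b in grid])
print("95% LM confidence set spans",
      (grid[keep].min(), grid[keep].max()) if keep.any() else "nothing on this grid")
```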

Relevance: 80.00%

Abstract:

In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions that includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is rejected less frequently once the possibility of non-normal errors is allowed for.
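For reference, the Gaussian benchmark these tests reduce to is the Gibbons-Ross-Shanken (1989) F-statistic. The sketch below computes it on simulated returns (the data and dimensions are invented; under H0 the intercepts alpha are jointly zero):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, N = 240, 5                              # months, test assets
rm = rng.normal(0.006, 0.045, T)           # market excess returns
betas = rng.uniform(0.5, 1.5, N)
R = np.outer(rm, betas) + rng.normal(0, 0.02, (T, N))   # H0 true: alphas = 0

X = np.column_stack([np.ones(T), rm])      # time-series OLS, asset by asset
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
alpha, resid = coef[0], R - X @ coef
Sigma = resid.T @ resid / (T - 2)          # residual covariance

grs = ((T - N - 1) / N) \
    * (alpha @ np.linalg.solve(Sigma, alpha)) \
    / (1 + rm.mean() ** 2 / rm.var(ddof=1))
print("GRS F =", grs, " p-value =", 1 - stats.f.cdf(grs, N, T - N - 1))
```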

Relevance: 80.00%

Abstract:

Machine learning is a broad field in which one seeks to learn the parameters of models from real data, in order to perform tasks that demand abilities attributed to human intelligence, such as handling high-dimensional data with many variations. Artificial neural networks are one example of such models. In certain so-called deep neural networks, "abstract" concepts are learned automatically. The work presented here draws its inspiration from deep neural networks, recurrent networks, and the neuroscience of the visual system. Our test tasks are the classification and denoising of quasi-binary images. We allow feedback in which high-level (more "abstract") representations influence low-level representations. This influence takes place during what we call relaxation: iterations in which the different levels (or layers) of the model influence one another. We present two families of architectures: a fully connected architecture that can in principle handle general data, and a convolutional architecture adapted more specifically to images. In all cases the data used are images, mainly images of handwritten digits. In one type of experiment we try to reconstruct data that have been corrupted. By comparing results with and without relaxation, we were able to observe the influence phenomenon described above, and we also note some numerical and visual gains in reconstruction performance when the influence of the upper layers is added. In another type of task, classification, few gains were observed, although we did find that in some cases relaxation helps to learn representations useful for classifying corrupted images. The convolutional architecture we developed, though more uncertain at the outset, nevertheless yields reconstructions numerically and visually similar to those obtained with the other architecture, even though its connectivity is constrained.
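The following toy sketch (invented weights and dimensions, not the thesis code) shows the relaxation mechanic in its simplest form: a hidden layer is repeatedly updated from a mixture of the bottom-up signal and the top-down signal coming from a more abstract layer, and the settled state is then decoded into a reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h1, h2 = 784, 256, 64                  # pixels, low level, high level
W1 = rng.normal(0, 0.05, (d, h1))         # untrained random weights: this
W2 = rng.normal(0, 0.05, (h1, h2))        # shows the mechanics, not a model

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def relax(x, steps=10):
    """Let the layers settle under mixed bottom-up / top-down influence."""
    g1 = sigmoid(x @ W1)                          # initial bottom-up pass
    for _ in range(steps):
        g2 = sigmoid(g1 @ W2)                     # upward: more abstract layer
        g1 = sigmoid(0.5 * (x @ W1) + 0.5 * (g2 @ W2.T))  # top-down feedback
    return sigmoid(g1 @ W1.T)                     # decode a reconstruction

x_noisy = (rng.random(d) < 0.1).astype(float)     # quasi-binary corrupted input
print(relax(x_noisy).shape)                       # (784,)
```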

Relevance: 80.00%

Abstract:

Transcription factors are specialized proteins that play an important role in various biological processes such as differentiation, the cell cycle, and tumorigenesis. They regulate gene transcription by binding to specific DNA sequences (cis-regulatory elements). Identifying these elements is a crucial step in understanding gene regulatory networks. With the advent of high-throughput sequencing technologies, the identification of all functional elements in genomes, including genes and cis-regulatory elements, has advanced considerably. While the number of genes in different species has been estimated, information about the elements that control and orchestrate the regulation of these genes remains poorly defined. Thanks to ChIP-chip and ChIP-sequencing techniques, it is possible to identify all the genomic regions bound by a transcription factor of interest. Several computational approaches have been developed to predict transcription factor binding sites; they fall into two main categories, enumerative and probabilistic algorithms. However, several studies have shown that these approaches generate high rates of false negatives and false positives, which makes the results difficult to interpret and, consequently, to validate experimentally. This thesis pursued two objectives. The first was to develop a new approach for discovering transcription factor binding sites (SAMD-ChIP) adapted to ChIP-chip and ChIP-sequencing data. Our approach implements a hybrid algorithm that combines the enumerative and probabilistic strategies in order to exploit the strengths of each. It performed well compared with existing motif-discovery tools on simulated datasets and on ChIP-chip and ChIP-sequencing datasets. SAMD-ChIP also has the advantage of exploiting the distribution of transcription factor binding sites around the centers of bound regions, restricting prediction to motifs that are enriched within a fixed-length window around those centers. Transcription factors rarely act alone; they often form complexes that interact with DNA to regulate their target genes. These interactions involve transcription factors whose binding sites are located close to one another or are mediated by chromatin loops. Our second objective was to exploit the spatial proximity of transcription factor binding sites within ChIP-chip and ChIP-sequencing regions to develop an approach for predicting composite motifs (motifs composed of two sites separated by a fixed-size spacer). We tested this module by predicting the co-localization of the two ERE half-sites that form the ERE site bound by the estrogen receptor ERα. The module has been incorporated into our motif-discovery tool SAMD-ChIP.
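As a rough illustration of the enumerative half of such a hybrid strategy (the probabilistic half, and everything specific to SAMD-ChIP, is omitted), one can count k-mers in a fixed-length window around each region's center and rank them by enrichment over shuffled sequences. All function names and parameters below are invented:

```python
from collections import Counter
import random

random.seed(0)

def kmer_counts(seqs, k):
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

def enriched_kmers(bound_seqs, k=6, window=100):
    # keep only the central window of each region, where true binding
    # sites tend to concentrate
    centered = [s[max(0, len(s) // 2 - window // 2): len(s) // 2 + window // 2]
                for s in bound_seqs]
    fg = kmer_counts(centered, k)
    # background from shuffled copies (a dinucleotide-preserving shuffle
    # would be better; a plain shuffle keeps the sketch short)
    bg = kmer_counts([''.join(random.sample(s, len(s))) for s in centered], k)
    return sorted(fg, key=lambda m: fg[m] / (bg[m] + 1), reverse=True)

peaks = ["ACGTGGTCACGTGACCA" * 10, "TTACGTGACCAGGTCAT" * 10]
print(enriched_kmers(peaks)[:5])
```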

Relevance: 80.00%

Abstract:

The Global Positioning System (GPS), with its high integrity, continuous availability and reliability, revolutionized radio-ranging navigation. With four or more GPS satellites in view, a GPS receiver can find its location anywhere over the globe with an accuracy of a few meters. High accuracy, within centimeters or even millimeters, is achievable by correcting the GPS signal with an external augmentation system. The use of satellites for critical applications like navigation has become a reality through the development of these augmentation systems (WAAS, SDCM, EGNOS, etc.), whose primary objective is to provide the essential integrity information needed for navigation service in their respective regions. Apart from these, many countries have initiated space-based regional augmentation systems, like GAGAN and IRNSS of India, MSAS and QZSS of Japan, and COMPASS of China. In future, these regional systems will operate simultaneously and emerge as a Global Navigation Satellite System (GNSS) supporting a broad range of activities in the global navigation sector. Among the different error sources in GPS precise positioning, the propagation delay due to atmospheric refraction is a limiting factor on the achievable accuracy. Although WADGPS, which aims at accurate positioning over a large area, broadcasts the different errors involved in GPS ranging, including ionospheric and tropospheric errors, the broadcast tropospheric corrections are not sufficiently accurate because of the large temporal and spatial variability of atmospheric parameters, especially in the lower atmosphere (troposphere). This necessitates estimating the tropospheric error from realistic values of tropospheric refractivity. Presently available methodologies for estimating tropospheric delay are mostly based on atmospheric data and GPS measurements from mid-latitude regions, where atmospheric conditions differ significantly from those over the tropics; no such attempts had been made over the tropics. In practice, when measured atmospheric parameters are unavailable, only analytical models developed from mid-latitude data can be used for this purpose. The major drawback of these existing models is that they neglect the seasonal variation of atmospheric parameters at stations near the equator, and at the tropics they underestimate the delay on quite a few occasions. In this context, the present study is a first and major step towards the development of tropospheric delay models for the Indian region, a prime requisite for the future space-based navigation programs GAGAN and IRNSS. Apart from models based on measured surface parameters, a region-specific model that requires no measured atmospheric parameter as input, depending only on latitude and day of the year, was developed for the tropical region with emphasis on the Indian sector. The large variability of atmospheric water vapor content on short spatial and/or temporal scales makes its measurement rather involved and expensive. A local network of GPS receivers is an effective tool for water vapor remote sensing over land, and this recently developed technique proves effective for measuring precipitable water (PW). The potential of using GPS to estimate atmospheric water vapor in all weather conditions and with high temporal resolution is explored; this will be useful for retrieving columnar water vapor from ground-based GPS data. A good GPS network could be a major source of water vapor information for Numerical Weather Prediction models and could act as a surrogate for the data gap in microwave remote sensing of water vapor over land.
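As a concrete example of a surface-met tropospheric model of the kind discussed (the classical Saastamoinen zenith delay, not the region-specific Indian model the thesis develops), the delay can be computed from surface pressure, temperature and humidity:

```python
import math

def saastamoinen_zenith_delay(P_hPa, T_K, rel_humidity, lat_deg=10.0, h_m=0.0):
    """Total zenith tropospheric delay in metres (Saastamoinen, 1972)."""
    # water vapour partial pressure via a Magnus-type saturation formula
    e = rel_humidity * 6.11 * 10 ** (7.5 * (T_K - 273.15) / (T_K - 35.85))
    # gravity correction for latitude and station height (h in metres)
    f = 1 - 0.00266 * math.cos(2 * math.radians(lat_deg)) - 0.00028 * h_m / 1000
    return 0.002277 / f * (P_hPa + (1255.0 / T_K + 0.05) * e)

# a warm, humid tropical station: delay of roughly 2.5-2.6 m
print(saastamoinen_zenith_delay(P_hPa=1010.0, T_K=300.0, rel_humidity=0.8))
```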

Relevance: 80.00%

Abstract:

This dissertation examines institution-internal local (critical) incident reporting systems ((C)IRS) as a way of learning from errors and undesired critical events (so-called incidents) in hospitals. The need to learn from incidents has been discussed intensively in health care since the 1990s. High-risk organizations in particular, where incidents can have fatal consequences, should develop comprehensive strategies that protect them against errors and adverse events and allow such events to be used as learning opportunities. Local IRS, as a central component of risk management and as voluntary documentation systems in hospitals, can be part of this strategy. They can provide a starting point for the systematic capture and analysis of individual learning opportunities and for transferring the lessons back into the organization. For this, a learning-conducive design, implementation, and embedding of local IRS is an important precondition. Studies of suitable, theoretically grounded, and effective IRS models, along with empirical data, have so far been lacking in the German-speaking world. The present case study in a Swiss university hospital (800 beds, 6,100 employees) makes such a contribution. For this purpose, a requirements profile for learning-conducive IRS was first derived from the literature. It takes into account criteria for design and use drawn from the IRS literature, as well as design conditions and success criteria for organizational learning borrowed from education science and work psychology. The requirements profile was validated and adapted in three empirical substudies. The first substudy took stock of the existing local IRS. Data were collected in four clinics over a period of 22 months by means of document analysis, guided interviews (N=18), seven structured group discussions, and participant observation. IRS features critical to success were identified, with the goal of a practice-oriented, learning-conducive system design and implementation of incident reporting that considers organizational conditions, learning potentials, and barriers. The second substudy examined two case examples of organizational learning through process support, which led to a mix-up-proof design for a medical product and to improved patient identification for blood sampling. Opportunities, barriers, and design approaches were derived for organizational learning in the hospital, showing how desired change and learning can be initiated using IRS and how better health outcomes can be achieved. The third substudy examined the extent to which the use and implementation of local IRS can be promoted by a hospital-wide staff survey on safety culture, assuming a positive interaction between a strong safety culture and the willingness to implement an IRS and to report incidents. A German-language version of the Hospital Survey on Patient Safety Culture (patient safety climate inventory) was used, with a response rate of 46.8% (2,897 valid questionnaires). According to a follow-up survey, the safety culture survey led to a decision to implement an IRS in 23 of 37 clinics; this was confirmed by monitoring IRS use. These studies provide, for the first time, empirical data on the effective and learning-conducive design and implementation of local IRS, using a Swiss health care organization as an example. The results reveal opportunities for and barriers to IRS as reporting and learning systems in hospitals. Improperly designed and implemented IRS were shown above all to hinder learning: blind actionism, a lack of prioritization of patient safety, and insufficient competencies, qualifications, and resources created new sources of error and reinforced first-order learning. By contrast, a learning-conducive design and maintenance of local IRS, embedded in a hospital-wide quality and patient safety strategy, proved effective in the sense of organizational learning and continuous improvement. Patient safety culture surveys, when suitably embedded, also proved an effective instrument for promoting the implementation of IRS. Twelve theses summarize, in condensed form, which design principles should be considered for IRS as an instrument of organizational learning within clinical risk management and for fostering a strong patient safety culture. The findings from the empirical studies lead to a dialogue-oriented framework model of organizational learning using local IRS. The work thus points out possibilities for learning at the various levels of the organization and highlights the need to (re)structure the current IRS discussion.

Relevance: 80.00%

Abstract:

The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998). Tirant lo Blanc, a chivalric romance, is the central work of Catalan literature and was hailed as "the best book of its kind in the world" by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Dámaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of the book was written between 1460 and 1465, but it was not printed until 1490. An intense and long-lasting debate over its authorship has sprouted from its first edition, whose introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last quarter of the book is by Galba (?-1490), written after the death of Martorell. Authors supporting the theory of single authorship include Riquer (1990), Chiner (1993) and Badia (1993), while those supporting double authorship include Riquer (1947), Coromines (1956) and Ferrando (1995); for an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383. Following the lead of this extensive literature, this paper looks into word length, the use of the most frequent words, and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of estimating a sudden change-point in those sequences, and the following sections propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models to the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models to the sequence of values taken by the first component of the correspondence analysis, as well as to sequences of other summary measures such as average word length. In Section 7 we fit models to the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most of these methods rely heavily on generalized linear models.
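A stripped-down sketch of the change-point idea (simulated word-length counts, not the Tirant lo Blanc data): for each candidate split, score the profile log-likelihood of "one multinomial distribution before the split, another after", and take the maximizing split.

```python
import numpy as np

rng = np.random.default_rng(2)
p_before = np.array([0.45, 0.35, 0.20])   # word-length class proportions
p_after  = np.array([0.38, 0.37, 0.25])   # slightly shifted distribution
counts = np.vstack([rng.multinomial(500, p_before, 382),
                    rng.multinomial(500, p_after, 487 - 382)])

def loglik(block):
    """Profile log-likelihood of one multinomial fitted to a block of rows."""
    totals = block.sum(axis=0)
    p = totals / totals.sum()
    return (block * np.log(p)).sum()

splits = range(10, len(counts) - 10)
ll = [loglik(counts[:k]) + loglik(counts[k:]) for k in splits]
print("estimated change-point near chapter", list(splits)[int(np.argmax(ll))])
```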

Relevance: 80.00%

Abstract:

The objective of this work is to study whether children's perception of the parental relationship and of parental empathy can predict prosocial behaviour during childhood. The sample was composed of 934 Argentine children, aged 9 to 12, of middle socio-economic level. The participants completed the Argentine Scale of Children's Perception of Parental Relationships (Richaud de Minzi, 2007), an Argentine adaptation of the Prosocial Behaviour Scale (Caprara and Pastorelli, 1993), and a questionnaire measuring children's perception of parental empathy (Richaud de Minzi, 2006). Structural equation modelling (SEM) analyses were conducted to explore our hypotheses. Six theoretical models fit the data very well. The results showed that the parental styles of acceptance and pathological control impact children's prosocial behaviour, and that children's perception of parental empathy was positively associated with children's prosocial behaviour. Finally, parental acceptance and pathological control were associated with children's perception of parental empathy, but negligent parental behaviour was not.

Relevance: 80.00%

Abstract:

This paper studies the mathematics and language results of 32,000 students from Bogotá on the 2008 Saber 11 test. The analysis recognizes that individuals are nested within neighborhoods and schools, but not all individuals from the same neighborhood attend the same school and vice versa. To model this data structure we use several econometric models, including a cross-classified hierarchical multilevel regression. Our central objective is to identify to what extent, and through which conditions, neighborhood and school characteristics correlate with the educational outcomes of the target population, and which neighborhood and school characteristics are most strongly associated with test results. We use data from the Saber 11 test, the C600 school census, the 2005 population census, and the Bogotá metropolitan police. Our estimates show that both the neighborhood and the school are correlated with test results, but the school effect appears to be much stronger than the neighborhood effect. The school characteristics most strongly associated with test results are teacher education, the length of the school day, tuition fees, and the school's socioeconomic context. The neighborhood characteristics most strongly associated with test results are the presence of university students in the UPZ, a cluster of high education levels, and the neighborhood crime level, which is negatively correlated. These results hold after controlling for family and personal characteristics.
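In hedged generic form (the paper's exact covariates and notation may differ), the cross-classified specification it refers to can be written as:

```latex
% student i attends school j(i) and lives in neighbourhood k(i);
% schools and neighbourhoods are crossed, not nested
\[
  y_i = \beta_0 + \mathbf{x}_i'\boldsymbol{\beta}
      + u_{j(i)} + v_{k(i)} + \varepsilon_i,
\qquad
  u_j \sim N(0,\sigma_u^2),\;
  v_k \sim N(0,\sigma_v^2),\;
  \varepsilon_i \sim N(0,\sigma_\varepsilon^2).
\]
```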

Relevance: 80.00%

Abstract:

The dependence between financial series is a fundamental parameter for estimating risk models. Value at Risk (VaR) is one of the most important measures used in financial risk management. Several estimation methods currently exist, such as historical simulation, which assumes no distribution for the returns of the risk factors or assets, and parametric methods, which assume normally distributed returns. This paper introduces copula theory as a measure of dependence between series and estimates an ARMA-GARCH-copula model to compute the Value at Risk of a portfolio composed of two financial series, the US dollar-peso and euro-peso exchange rates. The results show that estimating VaR by means of copulas is more accurate than the traditional methods.
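A condensed sketch of the copula step only, with stand-in data (the ARMA-GARCH filtering of each exchange-rate series is assumed already done, and a Gaussian copula stands in for whichever family the paper selects): fit the copula from rank correlation, simulate joint scenarios, map them back through the empirical marginals, and read VaR off the simulated portfolio quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# stand-ins for the two series of standardised ARMA-GARCH residuals
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=2000)

# Gaussian copula parameter from Spearman's rho (rank-based)
rho_s, _ = stats.spearmanr(z[:, 0], z[:, 1])
rho = 2 * np.sin(np.pi * rho_s / 6)              # Spearman -> Pearson

# simulate from the copula and map back through the empirical marginals
sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
u01 = stats.norm.cdf(sim)
r1 = np.quantile(z[:, 0], u01[:, 0])             # inverse empirical CDFs
r2 = np.quantile(z[:, 1], u01[:, 1])

portfolio = 0.5 * r1 + 0.5 * r2                  # equally weighted positions
print("95% VaR (residual units):", -np.quantile(portfolio, 0.05))
```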

Relevance: 80.00%

Abstract:

In recent decades there has been a very marked increase in the use of in vitro and in silico models to obtain data that improve the efficiency of new drug development programs. Two types of models are currently used to describe the pharmacokinetics of chemical compounds over time: empirical pharmacokinetic models and physiologically based pharmacokinetic (PBPK) models. PBPK models assume that the human body interacts with chemical compounds as an integrated system, so that an event occurring in one part of the body can influence an event occurring in another, apparently distinct, part. These models treat the human organism as a set of "compartments" that physiologically represent organs, tissues, and other physiological spaces. To use these models correctly it is necessary to determine the model's structure and characteristics, write the equations that describe it, and define and estimate its parameters. These models have been used with increasing frequency in toxicology and pharmaceutics, and in the future they may become a universal tool.
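A deliberately tiny PBPK-style sketch of the compartment idea: two well-stirred compartments (plasma and liver) connected by blood flow, with hepatic clearance. Parameter values are invented for illustration, not taken from any published model.

```python
from scipy.integrate import solve_ivp

Q = 90.0             # liver blood flow, L/h
V_p, V_l = 3.0, 1.8  # plasma and liver volumes, L
CL = 30.0            # intrinsic hepatic clearance, L/h
Kp = 0.8             # liver:plasma partition coefficient

def pbpk(t, y):
    c_p, c_l = y                        # concentrations in each compartment
    c_out = c_l / Kp                    # venous concentration leaving the liver
    dc_p = (Q * c_out - Q * c_p) / V_p
    dc_l = (Q * c_p - Q * c_out - CL * c_out) / V_l
    return [dc_p, dc_l]

sol = solve_ivp(pbpk, (0.0, 24.0), y0=[10.0, 0.0], dense_output=True)
print("plasma concentration at 6 h:", sol.sol(6.0)[0])
```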

Relevance: 80.00%

Abstract:

Ascertaining the location of palaeo-ice streams is crucial in order to produce accurate reconstructions of palaeo-ice sheets and examine interactions with the ocean-climate system. This paper reports evidence for a major ice stream in Amundsen Gulf, Canadian Arctic Archipelago. Mapping from satellite imagery (Landsat ETM+) and digital elevation models, including bathymetric data, is used to reconstruct flow patterns on southwestern Victoria Island and the adjacent mainland (Nunavut and Northwest Territories). Several flow-sets indicative of ice streaming are found feeding into the marine trough, and cross-cutting relationships between these flow-sets (together with previously published radiocarbon dates) reveal several phases of ice stream activity centred in Amundsen Gulf and Dolphin and Union Strait. A large erosional footprint on the continental shelf indicates that the ice stream (ca. 1000 km long and ca. 150 km wide) filled Amundsen Gulf, probably at the Last Glacial Maximum. Subsequently, the ice stream reorganised as the margin retreated back along the marine trough, eventually splitting into two separate low-gradient lobes in Prince Albert Sound and Dolphin and Union Strait. The location of this major ice stream holds important implications for ice sheet-ocean interactions and, specifically, for the development of Arctic Ocean ice shelves and the delivery of icebergs into the western Arctic Ocean during the late Pleistocene.

Relevance: 80.00%

Abstract:

BACKGROUND: The serum peptidome may be a valuable source of diagnostic cancer biomarkers. Previous mass spectrometry (MS) studies have suggested that groups of related peptides discriminatory for different cancer types are generated ex vivo from abundant serum proteins by tumor-specific exopeptidases. We tested 2 complementary serum profiling strategies to see if similar peptides could be found that discriminate ovarian cancer from benign cases and healthy controls. METHODS: We subjected identically collected and processed serum samples from healthy volunteers and patients to automated polypeptide extraction on octadecylsilane-coated magnetic beads and separately on ZipTips before MALDI-TOF MS profiling at 2 centers. The 2 platforms were compared and case control profiling data analyzed to find altered MS peak intensities. We tested models built from training datasets for both methods for their ability to classify a blinded test set. RESULTS: Both profiling platforms had CVs of approximately 15% and could be applied for high-throughput analysis of clinical samples. The 2 methods generated overlapping peptide profiles, with some differences in peak intensity in different mass regions. In cross-validation, models from training data gave diagnostic accuracies up to 87% for discriminating malignant ovarian cancer from healthy controls and up to 81% for discriminating malignant from benign samples. Diagnostic accuracies up to 71% (malignant vs healthy) and up to 65% (malignant vs benign) were obtained when the models were validated on the blinded test set. CONCLUSIONS: For ovarian cancer, altered MALDI-TOF MS peptide profiles alone cannot be used for accurate diagnoses.
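The train/blinded-test protocol described is the standard one; a generic sketch with a synthetic peak-intensity matrix (not the study's data, preprocessing, or classifier) looks like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(4)
X = rng.lognormal(size=(120, 60))       # 120 sera x 60 peptide-peak intensities
# pretend two peaks carry signal, so the labels are learnable
y = (X[:, 0] + X[:, 1] > np.median(X[:, 0] + X[:, 1])).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("blinded test-set accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```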

Relevance: 80.00%

Abstract:

A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (tRM gas) and the maximum gas production rate (RM gas). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and tRM gas (P<0.05) and lower (P<0.001) RM gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean MPT values from the individual laboratories were regressed, relationships were good (adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (adjusted R² = 0.844 or higher). The data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible, using appropriate mathematical models, to standardise data among laboratories so that data from one laboratory can be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds.
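A sketch of fitting such a single-phase profile: the sigmoidal form below, with asymptote A, half-time B and shape c, follows Groot-style GPP models and is an assumption about the exact equation used; the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gpp(t, A, B, c):
    """Cumulative gas volume at time t: asymptote A, half-time B, shape c."""
    return A * t**c / (B**c + t**c)

t = np.array([2, 4, 8, 12, 24, 48, 72, 96, 144], dtype=float)          # hours
vol = np.array([8, 18, 38, 55, 92, 128, 142, 149, 155], dtype=float)   # mL

(A, B, c), _ = curve_fit(gpp, t, vol, p0=[160.0, 20.0, 1.5])
# time of maximum rate for this form: B * ((c-1)/(c+1))**(1/c), for c > 1
t_rm = B * ((c - 1) / (c + 1)) ** (1 / c) if c > 1 else 0.0
print(f"A = {A:.1f} mL, B = {B:.1f} h, time of max rate = {t_rm:.1f} h")
```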