931 results for Model comparison
Abstract:
The most important oxidant for the degradation of volatile organic compounds (VOCs) in the atmosphere is the hydroxyl radical (OH), which is in fast chemical equilibrium with the hydroperoxyl radical (HO2). Previous measurements and model comparisons of these radical species in forested areas have revealed significant gaps in the understanding of the underlying processes. Within this doctoral thesis, measurements of OH and HO2 radicals were performed by laser-induced fluorescence (LIF) in a coniferous forest in southern Finland during the HUMPPA–COPEC–2010 campaign (Hyytiälä United Measurements of Photochemistry and Particles in Air – Comprehensive Organic Precursor Emission and Concentration study) in summer 2010. Several components of the LIF instrument were improved. A modified method for determining the background signal (InletPreInjector technique) was integrated into the measurement setup and used for the first time for measurements of atmospheric OH. An intercomparison of two instruments based on different methods for measuring OH radicals, chemical ionization mass spectrometry (CIMS) and the LIF technique, showed good agreement. The intercomparison demonstrates the capability and performance of the modified LIF instrument for accurate measurements of atmospheric OH concentrations. Subsequently, the LIF instrument was positioned on the top platform of a 20 m tall tower to investigate radical chemistry just above the treetops, at the interface between the ecosystem and the atmosphere. Comprehensive measurements, including measurements of total OH reactivity, were performed and analyzed using steady-state calculations and a box model constrained by the measured data. Under conditions of moderate OH reactivity (k'(OH) ≤ 15 s−1), OH production rates calculated from measured concentrations of OH precursor species are consistent with production rates derived, under the steady-state assumption, from measurements of the total OH loss. The primary photolytic OH sources contribute up to one third of the total OH production. Under conditions of moderate OH reactivity, OH recycling was shown to be dominated by the reactions of HO2 with NO or O3. During periods of high OH reactivity (k'(OH) > 15 s−1), additional recycling pathways that form OH directly, rather than via the reactions of HO2 with NO or O3, were identified. For hydroxyl radicals, box-model simulations and measurements agree well (OHmod/OHobs = 1.04 ± 0.16), whereas HO2 mixing ratios are significantly underestimated by the simulation (HO2mod/HO2obs = 0.3 ± 0.2) and the simulated OH reactivity does not match the measured OH reactivity. The simultaneous underestimation of HO2 mixing ratios and OH reactivity, while OH concentrations are described well by the simulation, suggests that the OH reactivity missing in the simulation constitutes an as yet unaccounted-for HO2 source. Additional, OH-independent RO2/HO2 sources, such as the thermal decomposition of transported peroxyacetyl nitrate (PAN) and the photolysis of glyoxal, are indicated.
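As an illustration of the steady-state comparison described above (a generic budget sketch; the exact source terms used in the thesis are not restated here), the total OH loss rate inferred from the measured reactivity and OH concentration must, at steady state, be balanced by the total production rate:

\[ k'(\mathrm{OH})\,[\mathrm{OH}] \;\approx\; P_{\mathrm{photolytic}} \;+\; k_{\mathrm{HO_2+NO}}[\mathrm{HO_2}][\mathrm{NO}] \;+\; k_{\mathrm{HO_2+O_3}}[\mathrm{HO_2}][\mathrm{O_3}] \;+\; \ldots \]

A shortfall of the right-hand side at high k'(OH) is what points to the additional, direct OH recycling pathways mentioned above.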
Abstract:
Background: As predicted by theory, traits associated with reproduction often evolve at a comparatively high speed. This is especially the case for courtship behaviour, which plays a central role in reproductive isolation. On the other hand, courtship behavioural traits often involve morphological and behavioural adaptations in both sexes; this suggests that their evolution might be under severe constraints, for instance irreversibility of character loss. Here, we use a recently proposed method to retrieve data on a peculiar courtship behavioural trait, i.e. antennal coiling, for 56 species of diplazontine parasitoid wasps. On the basis of a well-resolved phylogeny, we reconstruct the evolutionary history of antennal coiling and associated morphological modifications to study the mode of evolution of this complex character system. Results: Our study reveals a large variation in shape, location and ultrastructure of male-specific modifications on the antennae. As for antennal coiling, we find either single-coiling, double-coiling or the absence of coiling; each state is present in multiple genera. Using a model comparison approach, we show that the possession of antennal modifications is highly correlated with antennal coiling behaviour. Ancestral state reconstruction shows that both antennal modifications and antennal coiling are highly congruent with the molecular phylogeny, implying low levels of homoplasy and a comparatively low speed of evolution. Antennal coiling is lost on two independent occasions, and never reacquired. A zero rate of regaining antennal coiling is supported by maximum parsimony, maximum likelihood and Bayesian approaches. Conclusions: Our study provides the first comparative evidence for a tight correlation between male-specific antennal modifications and the use of the antennae during courtship. Antennal coiling in Diplazontinae evolved at a comparatively low rate, and was never reacquired in any of the studied taxa. This suggests that the loss of antennal coiling is irreversible on the timescale examined here, and therefore that evolutionary constraints have greatly influenced the evolution of antennal courtship in this group of parasitoid wasps. Further studies are needed to ascertain whether the loss of antennal coiling is irreversible on larger timescales, and whether evolutionary constraints have influenced courtship behavioural traits in a similar way in other groups.
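As a sketch of the kind of rate test implied here (a generic two-state Markov formulation with gain rate q01 and loss rate q10 for antennal coiling; the study's own parsimony, likelihood and Bayesian analyses are not restated), support for a zero regain rate can be expressed as a likelihood comparison between the unconstrained and the constrained model:

\[ \mathrm{LR} \;=\; 2\left[\ln L(\hat{q}_{01}, \hat{q}_{10}) \;-\; \ln L(q_{01}=0,\; \hat{q}_{10})\right], \]

where a negligible LR (or a Bayes factor favouring the constrained model) indicates that the data are consistent with antennal coiling never being regained once lost.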
Abstract:
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with the known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties on the order of days to weeks, without mathematically defined confidence information. In contrast, a single 1H-MRS measurement of brain tissue in situ yields PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs.
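As an illustration of the pooling step described above (standard inverse-variance weighting; the specific metabolites and variances of the study are not restated here), individual estimates PMI_i with variances sigma_i^2 are combined as

\[ \widehat{\mathrm{PMI}} \;=\; \frac{\sum_i \mathrm{PMI}_i / \sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad \mathrm{Var}\!\left(\widehat{\mathrm{PMI}}\right) \;=\; \frac{1}{\sum_i 1/\sigma_i^2}, \]

which down-weights metabolites whose inverted time-course functions predict the PMI with low precision and yields the defined confidence interval of the pooled estimate.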
Abstract:
We integrated research on the dimensionality of career success into social-cognitive career theory and explored the positive feedback loop between occupational self-efficacy and objective and subjective career success over time (self-efficacy → objective success → subjective success → self-efficacy). Furthermore, we theoretically accounted for synchronous and time-lagged effects, as well as indirect reciprocity between the variables. We tested the proposed model by means of longitudinal structural equation modeling in a 9-year four-wave panel design, by applying a model comparison approach and indirect effect analyses (N = 608 professionals). The findings supported the proposed positive feedback loop between occupational self-efficacy and career success. Supporting our time-based reasoning, the findings showed that unfolding effects between occupational self-efficacy and objective career success take more time (i.e., time-lagged or over time) than unfolding effects between objective and subjective career success, as well as between subjective career success and occupational self-efficacy (i.e., synchronous or concurrently). Indirect effects of past on future occupational self-efficacy via objective and subjective career success were significant, providing support for an indirect reciprocity model. Results are discussed with respect to extensions of social-cognitive career theory and occupational self-efficacy development over time.
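As a sketch of the hypothesized feedback loop (a generic cross-lagged formulation; the exact paths, controls and wave structure estimated in the study are not restated here), with occupational self-efficacy OSE, objective career success OCS, and subjective career success SCS at wave t:

\[ \mathrm{OCS}_t = \beta_1\,\mathrm{OSE}_{t-1} + \cdots, \qquad \mathrm{SCS}_t = \beta_2\,\mathrm{OCS}_t + \cdots, \qquad \mathrm{OSE}_t = \beta_3\,\mathrm{SCS}_t + \cdots, \]

where the OSE → OCS path is time-lagged while the OCS → SCS and SCS → OSE paths are synchronous, matching the timing pattern reported above; the product of the three paths corresponds to the indirect reciprocity effect of past on future self-efficacy.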
Abstract:
We report the intercalibration of paleomagnetic secular variation (PSV) and radiocarbon dates of two expanded postglacial sediment cores from geographically proximal, but oceanographically and sedimentologically contrasting settings. The objective is to improve relative correlation and chronology over what can be achieved with either method alone. Core MD99-2269 was taken from the Húnaflóaáll Trough on the north Iceland shelf. Core MD99-2322 was collected from the Kangerlussuaq Trough on the east Greenland margin. Both cores are well dated, with 27 and 20 accelerator mass spectrometry 14C dates for cores 2269 and 2322, respectively. Paleomagnetic measurements made on u channel samples document a strong, stable, single-component magnetization. The temporal similarities of paleomagnetic inclination and declination records are shown using each core's independent calibrated radiocarbon age model. Comparison of the PSV records reveals that the relative correlation between the two cores could be further improved. Starting in the depth domain, tie points initially based on calibrated 14C dates are either adjusted or added to maximize PSV correlations. Radiocarbon dates from both cores are then combined on a common depth scale resulting from the PSV correlation. Support for the correlation comes from the consistent interweaving of dates, correct alignment of the Saksunarvatn tephra, and the improved correlation of paleoceanographic proxy data (percent carbonate). These results demonstrate that PSV correlation used in conjunction with 14C dates can improve relative correlation and also regional chronologies by allowing dates from various stratigraphic sequences to be combined into a single, higher dating density, age-to-depth model.
Abstract:
We use the fully coupled atmosphere-ocean three-dimensional model of intermediate complexity iLOVECLIM to simulate the climate and the oxygen stable isotopic signal during the Last Glacial Maximum (LGM, 21,000 yr). By using a model that explicitly simulates the sensor (d18O), results can be directly compared with data from climatic archives in the different realms. Our results indicate that iLOVECLIM reproduces well the main features of the LGM climate in both the atmospheric and oceanic components. The annual mean d18O in precipitation shows more depleted values in the northern and southern high latitudes during the LGM. The model reproduces very well the spatial gradient observed in ice core records over the Greenland ice sheet. We observe a general pattern toward more enriched values for continental calcite d18O in the model at the LGM, in agreement with speleothem data. This can be explained by both a general atmospheric cooling in the tropical and subtropical regions and a reduction in precipitation, as confirmed by reconstructions derived from pollen and plant macrofossils. Data-model comparison for sea surface temperature indicates that iLOVECLIM is capable of satisfactorily simulating the change in oceanic surface conditions between the LGM and the present. Our data-model comparison for calcite d18O allows us to investigate the large discrepancies with respect to glacial temperatures recorded by different microfossil proxies in the North Atlantic region. The results argue for a strong mean annual cooling between the LGM and the present (>6°C), supporting the foraminifera transfer-function reconstruction but disagreeing with alkenone and dinocyst reconstructions. The data-model comparison also reveals that the large positive calcite d18O anomaly in the Southern Ocean may be explained by a substantial cooling, although the driver of this pattern is unclear. We deduce a large positive d18Osw anomaly for the northern Indian Ocean that contrasts with a large negative d18Osw anomaly in the China Sea between the LGM and the present. This pattern may be linked to changes in the hydrological cycle over these regions. Our simulation of the deep ocean suggests that changes in d18Osw between the LGM and the present are not spatially homogeneous. This is supported by reconstructions derived from pore fluids in deep-sea sediments. The model underestimates the deep-ocean cooling, thus biasing the comparison with benthic calcite d18O data. Nonetheless, our data-model comparison supports a heterogeneous cooling of a few degrees (2-4°C) in the LGM ocean.
Abstract:
Accumulating evidence suggests a role for the medial temporal lobe (MTL) in working memory (WM). However, little is known concerning its functional interactions with other cortical regions in the distributed neural network subserving WM. To reveal these, we studied subjects with MTL damage and characterized changes in effective connectivity while subjects engaged in a WM task. Specifically, we compared dynamic causal models, extracted from magnetoencephalographic recordings during verbal WM encoding, in temporal lobe epilepsy patients (with left hippocampal sclerosis) and controls. Bayesian model comparison indicated that the best model (across subjects) evidenced bilateral, forward, and backward connections, coupling inferior temporal cortex (ITC), inferior frontal cortex (IFC), and MTL. MTL damage weakened backward connections from left MTL to left ITC, a decrease accompanied by strengthening of (bidirectional) connections between IFC and MTL in the contralesional hemisphere. These findings provide novel evidence concerning functional interactions between nodes of this fundamental cognitive network and shed light on how these interactions are modified as a result of focal damage to MTL. The findings highlight that a reduced (top-down) influence of the MTL on ipsilateral language regions is accompanied by enhanced reciprocal coupling in the undamaged hemisphere, providing a first demonstration of “connectional diaschisis.”
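As a sketch of the selection criterion behind Bayesian comparison of dynamic causal models (a standard formulation; the specific approximation used in the study is only noted here), two candidate models m1 and m2 are compared via their evidence,

\[ \mathrm{BF}_{12} \;=\; \frac{p(y \mid m_1)}{p(y \mid m_2)}, \qquad \ln p(y \mid m) \;\approx\; F(m), \]

where F is the variational free-energy approximation to the log evidence typically used in DCM; here the winning model across subjects was the one with bilateral forward and backward connections coupling ITC, IFC, and MTL.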
Abstract:
Road traffic accidents are a highly relevant social phenomenon and one of the main causes of death in developed countries. To understand this complex phenomenon, sophisticated econometric models are applied both in the academic literature and by public administrations. This thesis is devoted to the analysis of macroscopic models for road accidents in Spain. The objective of the thesis can be divided into two blocks: a. To obtain a better understanding of the road accident phenomenon through the application and comparison of two macroscopic models frequently used in this area, DRAG and UCM, applied to van-involved accidents in Spain during the period 2000-2009. The analyses were carried out within the frequentist framework using the TRIO, SAS and TRAMO/SEATS programs. b. Model application and the selection of the most relevant variables are current research topics, and this thesis develops and applies a methodology intended to improve, through theoretical and practical tools, the understanding of selection and comparison of macroscopic models. Methodologies were developed both for model selection and for model comparison. The model selection methodology was applied to fatal accidents on the road network in the period 2000-2011, and the proposed methodology for comparing macroscopic models was applied to the frequency and severity of van-involved accidents in the period 2000-2009. The following contributions result from these developments: a. Deeper insight into the models through the interpretation of the response variables and the predictive power of the models; knowledge of the behaviour of van-involved accidents was extended in this process. b1. Development of a methodology for selecting the variables relevant to explaining the occurrence of road accidents. Taking the results of a) into account, the proposed methodology is based on DRAG models, whose parameters were estimated within the Bayesian framework and applied to fatal accident data for the years 2000-2011 in Spain. This novel and original methodology was compared with dynamic regression (DR) models, which are the most common models for working with stochastic processes. The results are comparable, and the new proposal constitutes a methodological contribution that optimizes the model selection process at low computational cost. b2. The thesis designs a methodology for the theoretical comparison of competing models through the joint application of Monte Carlo simulation, design of experiments and analysis of variance (ANOVA). The competing models have different structures, which affect the estimation of the effects of the explanatory variables. Taking into account the study developed in b1), this work aims to determine how the stochastic trend component that a UCM models explicitly is captured by a DRAG model, which has no specific mechanism for modelling this element. The results of this study are important for deciding whether the series needs to be differenced before modelling. b3. New algorithms were developed to carry out the methodological exercises, implemented in different programs such as R, WinBUGS and MATLAB.
The fulfilment of the objectives of the thesis through the developments described above is summarized in the following conclusions: 1. The road accident phenomenon was analysed by means of two macroscopic models. The effects of the influential factors differ depending on the methodology applied. Prediction results are similar, with a slight superiority of the DRAG methodology. 2. The variable and model selection methodology provides practical results as far as the explanation of road accidents is concerned. Prediction and interpretation were also improved by this new methodology. 3. A methodology was implemented to deepen the knowledge of the relationship between the effect estimates of two competing models such as DRAG and UCM. A very important aspect here is the interpretation of the trend by two different models, from which very useful information was obtained for researchers in the field of modelling. The results have satisfactorily extended the knowledge of the modelling process and the understanding of van-involved accidents and total fatal accidents in Spain. ABSTRACT Road accidents are a very relevant social phenomenon and one of the main causes of death in industrialized countries. Sophisticated econometric models are applied in academic work and by the administrations for a better understanding of this very complex phenomenon. This thesis is thus devoted to the analysis of macro models for road accidents with application to the Spanish case. The objectives of the thesis may be divided into two blocks: a. To achieve a better understanding of the road accident phenomenon by means of the application and comparison of two of the most frequently used macro models: DRAG (demand for road use, accidents and their gravity) and UCM (unobserved components model); the application was made to van-involved accident data in Spain in the period 2000-2009. The analysis has been carried out within the frequentist framework using available state-of-the-art software (TRIO, SAS and TRAMO/SEATS). b. Concern about the application of the models and about the relevant input variables to be included in the model has driven the research to improve, by theoretical and practical means, the understanding of methodological choice and model selection procedures. The theoretical developments have been applied to fatal accidents during the period 2000-2011 and van-involved road accidents in 2000-2009. This has resulted in the following contributions: a. Insight into the models has been gained through interpretation of the effect of the input variables on the response and the prediction accuracy of both models. The behavior of van-involved road accidents has been explained during this process. b1. Development of an input variable selection procedure, which is crucial for an efficient choice of the inputs. Following the results of a), the procedure uses a DRAG-like model. The estimation is carried out within the Bayesian framework. The procedure has been applied to the total road accident data in Spain in the period 2000-2011. The results of the model selection procedure are compared and validated through a dynamic regression model, given that the original data have a stochastic trend. b2. A methodology for theoretical comparison between the two models through Monte Carlo simulation, computer experiment design and ANOVA.
The models have different structures, and this affects the estimation of the effects of the input variables. The comparison is thus carried out in terms of the effects of the input variables on the response, which are in general different but should be related. Considering the results of the study carried out in b1), this study tries to find out how a stochastic time trend is captured by the DRAG model, since there is no specific trend component in DRAG. Given the results of b1), the findings of this study are crucial in order to see whether the estimation of data with a stochastic component through DRAG is valid or whether the data need a certain adjustment (typically differencing) prior to estimation. The model comparison methodology was applied to the UCM and DRAG models, considering that, as mentioned above, the UCM has a specific trend term while DRAG does not. b3. New algorithms were developed for carrying out the methodological exercises. For this purpose, different software packages (R, WinBUGS and MATLAB) were used. These objectives and contributions have resulted in the following findings: 1. The road accident phenomenon has been analyzed by means of two macro models. The effects of the influential input variables may be estimated through the models, but it has been observed that the estimates vary from one model to the other, although prediction accuracy is similar, with a slight superiority of the DRAG methodology. 2. The variable selection methodology provides very practical results as far as the explanation of road accidents is concerned. Prediction accuracy and interpretability have been improved by means of a more efficient input variable and model selection procedure. 3. Insight has been gained into the relationship between the estimates of the effects using the two models. A very relevant issue here is the role of trend in both models, and relevant recommendations for the analyst have resulted from this. The results have provided a very satisfactory insight into both modeling aspects and the understanding of both van-involved and total fatal accident behavior in Spain.
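As a sketch of the structural difference discussed above (assuming a basic local linear trend specification; the exact UCM estimated in the thesis is not restated here), a UCM for the accident series y_t makes the stochastic trend explicit:

\[ y_t = \mu_t + \sum_i \beta_i x_{i,t} + \varepsilon_t, \qquad \mu_t = \mu_{t-1} + \nu_{t-1} + \eta_t, \qquad \nu_t = \nu_{t-1} + \zeta_t, \]

with mutually independent disturbances epsilon, eta, zeta. A DRAG-type regression has no such trend component, so a stochastic trend in the data must either be absorbed by the explanatory variables or removed by differencing before estimation, which is precisely the question the Monte Carlo comparison addresses.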
Abstract:
The objective of this study was to evaluate genetic aspects related to in vitro embryo production in the Guzerá breed. The first study focused on the estimation of genetic and phenotypic (co)variances for traits related to embryo production and on the detection of a possible association with age at first calving (AFC). Low to moderate heritabilities were detected for traits related to oocyte and embryo production. There was a weak genetic association between traits linked to artificial reproduction and age at first calving. The second study evaluated genetic and inbreeding trends in a Guzerá population in Brazil. Donors and in vitro-produced embryos were treated as two subpopulations in order to compare differences in annual genetic change and in the inbreeding coefficient. The annual trend of the inbreeding coefficient (F) was higher for the overall population, and a quadratic effect was detected. However, the mean F of the embryo subpopulation was higher than that of the overall population and of the donors. A higher annual genetic gain was observed for age at first calving and for 305-day milk yield among in vitro-produced embryos than among donors or in the overall population. The third study examined the effects of the inbreeding coefficients of the donor, of the sire (used in in vitro fertilization) and of the embryos on in vitro embryo production outcomes in the Guzerá breed. Effects of donor and embryo inbreeding on the studied traits were detected. The fourth (and final) study was designed to compare the adequacy of linear and generalized linear mixed models under restricted maximum likelihood (REML) and their suitability for discrete variables. Four hierarchical models assuming different distributions for the count data in the dataset were fitted. Inference was based on residual diagnostics and on the comparison of ratios of variance components between models for each variable. Poisson models outperformed both the linear model (with and without transformation of the response variable) and the negative binomial model in goodness of fit and predictive ability, despite clear differences observed in the distribution of the variables. Among the models tested, the poorest fit was obtained for the linear model with a logarithmic transformation (log10(X + 1)) of the response variable.
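A minimal sketch of the kind of distributional comparison described in the fourth study (synthetic data and plain statsmodels GLMs used purely for illustration; the thesis itself fitted hierarchical mixed models, which are not reproduced here):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
# Overdispersed synthetic counts, standing in for e.g. embryos produced per session (illustrative only)
mu = np.exp(0.5 + 0.3 * x)
y = rng.negative_binomial(2, 2.0 / (2.0 + mu))

# Candidate models: Poisson GLM, negative binomial GLM, Gaussian model on log10(y + 1)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
linear_fit = sm.OLS(np.log10(y + 1), X).fit()  # log10(X + 1) transform as in the thesis

print("Poisson AIC:", poisson_fit.aic)
print("Negative binomial AIC:", negbin_fit.aic)
print("Gaussian on log10(y+1) AIC:", linear_fit.aic)  # not on the same likelihood scale as the GLMs
# Deviance (and deviance residual plots) complement the information criteria when judging fit.
print("Poisson deviance:", poisson_fit.deviance)
print("Negative binomial deviance:", negbin_fit.deviance)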
Abstract:
Maximum likelihood methods (MLM) offer an alternative framework to conventional frequentist statistics, moving away from the use of the p-value to reject a single null hypothesis and instead using likelihoods to evaluate the degree of support in the data for a set of alternative hypotheses (or models) of interest to the researcher. These methods have been widely applied in ecology within the framework of neighbourhood models. Such models use a spatially explicit approach to describe plant demographic processes or ecosystem processes as a function of the attributes of neighbouring individuals. They are therefore phenomenological models whose main utility lies in serving as tools for synthesizing the multiple mechanisms by which species can interact with and influence their environment, providing a measure of the per-capita effect of individuals with different characteristics (e.g. size, species, physiological traits) on the processes of interest. The great advantage of applying MLM within the framework of neighbourhood models is that it allows multiple models, using different neighbour attributes and/or functional forms, to be fitted and compared in order to select the one with the greatest empirical support. In this way, each model acts as a "virtual experiment" to answer questions related to the magnitude and spatial extent of the effects of different coexisting species, and to draw conclusions about possible implications for the functioning of communities and ecosystems. This work synthesizes the techniques for implementing MLM and neighbourhood models in terrestrial ecology, summarizing their use to date and highlighting new lines of application.
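As a sketch of the model support measure implied above (a standard information-theoretic formulation; the review itself may work with raw likelihoods, AICc or other criteria), each candidate neighbourhood model i with maximized likelihood L_i and k_i parameters can be weighted as

\[ \mathrm{AIC}_i = -2\ln L_i + 2k_i, \qquad \Delta_i = \mathrm{AIC}_i - \mathrm{AIC}_{\min}, \qquad w_i = \frac{\exp(-\Delta_i/2)}{\sum_j \exp(-\Delta_j/2)}, \]

so models built on different neighbour attributes or functional forms receive weights w_i quantifying their relative empirical support, which is what turns each model into a "virtual experiment".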
Abstract:
The strength and geometry of the Atlantic meridional overturning circulation is tightly coupled to climate on glacial-interglacial and millennial timescales, but has proved difficult to reconstruct, particularly for the Last Glacial Maximum. Today, the return flow from the northern North Atlantic to lower latitudes associated with the Atlantic meridional overturning circulation reaches down to approximately 4,000 m. In contrast, during the Last Glacial Maximum this return flow is thought to have occurred primarily at shallower depths. Measurements of sedimentary 231Pa/230Th have been used to reconstruct the strength of circulation in the North Atlantic Ocean, but the effects of biogenic silica on 231Pa/230Th-based estimates remain controversial. Here we use measurements of 231Pa/230Th ratios and biogenic silica in Holocene-aged Atlantic sediments and simulations with a two-dimensional scavenging model to demonstrate that the geometry and strength of the Atlantic meridional overturning circulation are the primary controls of 231Pa/230Th ratios in modern Atlantic sediments. For the glacial maximum, a simulation of Atlantic overturning with a shallow, but vigorous circulation and bulk water transport at around 2,000 m depth best matched observed glacial Atlantic 231Pa/230Th values. We estimate that the transport of intermediate water during the Last Glacial Maximum was at least as strong as deep water transport today.
Abstract:
The aim of this report is to describe the use of WinBUGS for two datasets that arise from typical population pharmacokinetic studies. The first dataset relates to gentamicin concentration-time data that arose as part of routine clinical care of 55 neonates. The second dataset incorporated data from 96 patients receiving enoxaparin. Both datasets were originally analyzed by using NONMEM. In the first instance, although NONMEM provided reasonable estimates of the fixed-effects parameters, it was unable to provide satisfactory estimates of the between-subject variance. In the second instance, the use of NONMEM resulted in the development of a successful model, albeit with limited available information on the between-subject variability of the pharmacokinetic parameters. WinBUGS was used to develop a model for both of these datasets. Model comparison for the enoxaparin dataset was performed by using the posterior distribution of the log-likelihood and a posterior predictive check. The use of WinBUGS supported the same structural models tried in NONMEM. For the gentamicin dataset, a one-compartment model with intravenous infusion was developed, and the population parameters, including the full between-subject variance-covariance matrix, were available. Analysis of the enoxaparin dataset supported a two-compartment model as superior to the one-compartment model, based on the posterior predictive check. Again, the full between-subject variance-covariance matrix parameters were available. Fully Bayesian approaches using MCMC methods, via WinBUGS, can offer added value for analysis of population pharmacokinetic data.
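A minimal sketch of a posterior predictive check of the kind used for the model comparison above (purely illustrative: synthetic posterior draws, a simplified one-compartment IV bolus prediction rather than the infusion/two-compartment WinBUGS models of the report, and a generic sum-of-squares discrepancy):

import numpy as np

rng = np.random.default_rng(1)

# Illustrative "observed" concentrations (mg/L) after a 100 mg IV bolus at times t (h)
t = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
dose = 100.0
c_obs = np.array([8.2, 6.9, 4.6, 2.2, 1.1])

# Stand-in posterior samples of clearance and volume (in practice these come from MCMC, e.g. WinBUGS)
n_draws = 2000
cl = rng.lognormal(mean=np.log(1.0), sigma=0.2, size=n_draws)   # L/h
v = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n_draws)   # L
sigma = 0.5                                                      # residual SD, assumed known here

def predict(cl_i, v_i):
    """One-compartment IV bolus: C(t) = (dose / V) * exp(-(CL/V) * t)."""
    return (dose / v_i) * np.exp(-(cl_i / v_i) * t)

# For each posterior draw, simulate replicated data and compare discrepancies
exceed = 0
for i in range(n_draws):
    mu = predict(cl[i], v[i])
    c_rep = rng.normal(mu, sigma)
    d_obs = np.sum((c_obs - mu) ** 2)
    d_rep = np.sum((c_rep - mu) ** 2)
    exceed += d_rep >= d_obs

print("posterior predictive p-value:", exceed / n_draws)  # values near 0 or 1 signal model misfit

Repeating such a check for competing structural models (e.g. one- versus two-compartment) is one way the predictive adequacy of each can be contrasted.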
Abstract:
Relational reasoning, or the ability to identify meaningful patterns within any stream of information, is a fundamental cognitive ability associated with academic success across a variety of domains of learning and levels of schooling. However, the measurement of this construct has been historically problematic. For example, while the construct is typically described as multidimensional—including the identification of multiple types of higher-order patterns—it is most often measured in terms of a single type of pattern: analogy. For that reason, the Test of Relational Reasoning (TORR) was conceived and developed to include three other types of patterns that appear to be meaningful in the educational context: anomaly, antinomy, and antithesis. Moreover, as a way to focus on fluid relational reasoning ability, the TORR was developed to include, except for the directions, entirely visuo-spatial stimuli, which were designed to be as novel as possible for the participant. By focusing on fluid intellectual processing, the TORR was also developed to be fairly administered to undergraduate students—regardless of the particular gender, language, and ethnic groups they belong to. However, although some psychometric investigations of the TORR have been conducted, its actual fairness across those demographic groups has yet to be empirically demonstrated. Therefore, a systematic investigation of differential item functioning (DIF) across demographic groups on TORR items was conducted. A large (N = 1,379) sample, representative of the University of Maryland on key demographic variables, was collected, and the resulting data were analyzed using a multi-group, multidimensional item-response theory model comparison procedure. Using this procedure, no significant DIF was found on any of the TORR items across any of the demographic groups of interest. This null finding is interpreted as evidence of the cultural fairness of the TORR, and potential test-development choices that may have contributed to that cultural fairness are discussed. For example, the choice to make the TORR an untimed measure, to use novel stimuli, and to avoid stereotype threat in test administration may have contributed to its cultural fairness. Future steps for psychometric research on the TORR, and substantive research utilizing the TORR, are also presented and discussed.
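As a sketch of the comparison involved in such a DIF analysis (assuming, for simplicity, a unidimensional two-parameter logistic item model; the actual multi-group, multidimensional specification used for the TORR is not restated here), the probability that examinee i in group g answers item j correctly is

\[ P(X_{ij}=1 \mid \theta_i) \;=\; \frac{1}{1 + \exp\!\left[-a_{jg}\,(\theta_i - b_{jg})\right]}, \]

and DIF for item j is assessed by comparing a model that constrains a_{jg} and b_{jg} to be equal across groups against one that lets them differ; when freeing the parameters yields no significant improvement in fit, the item shows no DIF, which is the pattern reported above for all TORR items.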