930 results for BIASES
Abstract:
Dynamics of biomolecules over various spatial and time scales are essential for biological functions such as molecular recognition, catalysis and signaling. However, reconstructing biomolecular dynamics from experimental observables requires determining a conformational probability distribution. Unfortunately, such distributions cannot be fully constrained by the limited information available from experiments, making the problem ill-posed in the terminology of Hadamard: it has no unique solution, and multiple or even infinitely many solutions may exist. To overcome this ill-posedness, the problem must be regularized by making assumptions, which inevitably introduce biases into the result.
Here, I present two continuous probability density function approaches to solve an important inverse problem called the RDC trigonometric moment problem. By focusing on interdomain orientations, we reduced the problem to the determination of a distribution on the 3D rotational space from residual dipolar couplings (RDCs). We derived an analytical equation that relates the alignment tensors of adjacent domains, which serves as the foundation of both methods. In the first approach, the ill-posed nature of the problem is avoided by introducing a continuous distribution model that incorporates a smoothness assumption. To find the optimal distribution, we also designed an efficient branch-and-bound algorithm that exploits the mathematical structure of the analytical solutions and is guaranteed to find the distribution that best satisfies the analytical relationship. The method performed well when tested under various levels of experimental noise and when applied to two protein systems. The second approach avoids the use of any model by invoking the maximum entropy principle. This 'model-free' approach delivers the least biased result consistent with our state of knowledge. In this approach, the solution is an exponential function of Lagrange multipliers; to determine the multipliers, a convex objective function is constructed, so the maximum entropy solution can be found easily by gradient descent methods. Both algorithms can be applied to biomolecular RDC data in general, including data from RNA and DNA molecules.
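The maximum-entropy machinery described in this abstract (an exponential-family solution whose Lagrange multipliers minimize a convex dual, found by gradient descent) can be sketched generically. The following is an illustration under assumed inputs, not the thesis's actual RDC formulation: a discrete distribution on a 1D grid stands in for the 3D rotational space, and the moment functions and target values are invented.

```python
import numpy as np

# Minimal maximum-entropy sketch (all inputs hypothetical):
# find p(x) on a grid maximizing entropy subject to moment constraints
# E_p[f_k(x)] = c_k. The solution has the form p ∝ exp(Σ_k λ_k f_k(x)),
# and the multipliers λ minimize the convex dual log Z(λ) − λ·c.

x = np.linspace(-np.pi, np.pi, 200)          # stand-in for orientation space
F = np.vstack([np.cos(x), np.cos(2 * x)])    # assumed moment functions f_k
c = np.array([0.3, 0.1])                     # assumed target moments

lam = np.zeros(2)
for _ in range(2000):                        # plain gradient descent on the dual
    w = np.exp(lam @ F)
    p = w / w.sum()                          # current exponential-family solution
    grad = F @ p - c                         # gradient of the dual: E_p[f] − c
    lam -= 0.5 * grad

print(np.round(F @ p, 3))                    # recovered moments, close to c
```

Because the dual objective is convex, this simple iteration converges without the branch-and-bound machinery required by the first (model-based) approach.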
Abstract:
Duarte et al. draw attention to the "embedding of liberal values and methods" in social psychological research. They note how these biases are often invisible to the researchers themselves. The authors themselves fall prey to these "invisible biases" by utilizing the five-factor model of personality and the trait of openness to experience as one possible explanation for the under-representation of political conservatives in social psychology. I show that the manner in which the trait of openness to experience is conceptualized and measured is a particularly blatant example of the very liberal bias the authors decry.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. In principle, a survey organization could devote substantial resources to obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze; in practice, this scenario is rarely realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their attrition-corrected inferences are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
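The data-augmentation idea in this abstract (synthetic records that match a prior margin, with all other variables left missing) can be sketched concretely. The example below is hypothetical: the variable names, levels, and prior probabilities are invented, and the MCMC step that would follow is omitted.

```python
import numpy as np
import pandas as pd

# Hypothetical two-variable categorical dataset.
data = pd.DataFrame({"educ": [0, 1, 1, 2], "employ": [1, 0, 1, 1]})

def augment(df, var, prior_probs, n_aug):
    """Append n_aug synthetic records whose empirical margin for `var`
    matches prior_probs; every other variable is left missing (NaN).
    A larger n_aug encodes a tighter prior on that margin."""
    levels = np.arange(len(prior_probs))
    counts = np.round(np.asarray(prior_probs) * n_aug).astype(int)
    synth = pd.DataFrame({var: np.repeat(levels, counts)})
    # concat fills the remaining columns of the synthetic rows with NaN
    return pd.concat([df, synth], ignore_index=True)

# Assumed prior belief: P(educ = 0, 1, 2) = (0.2, 0.5, 0.3), weight of 100 records.
aug = augment(data, "educ", prior_probs=[0.2, 0.5, 0.3], n_aug=100)
print(aug["educ"].tail(100).value_counts(normalize=True).sort_index())
```

The concatenated data would then be passed to a latent class MCMC sampler that treats the NaN entries as missing at random, as the abstract describes.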
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
Abstract:
An abundance of research in the social sciences has demonstrated a persistent bias against nonnative English speakers (Giles & Billings, 2004; Gluszek & Dovidio, 2010). Yet, organizational scholars have only begun to investigate the underlying mechanisms that drive the bias against nonnative speakers and subsequently design interventions to mitigate these biases. In this dissertation, I offer an integrative model to organize past explanations for accent-based bias into a coherent framework, and posit that nonnative accents elicit social perceptions that have implications at the personal, relational, and group level. I also seek to complement the existing emphasis on main effects of accents, which focuses on the general tendency to discriminate against those with accents, by examining moderators that shed light on the conditions under which accent-based bias is most likely to occur. Specifically, I explore the idea that people's beliefs about the controllability of accents can moderate their evaluations of nonnative speakers, such that those who believe that accents can be controlled are more likely to demonstrate a bias against nonnative speakers. I empirically test my theoretical model in three studies in the context of entrepreneurial funding decisions. Results generally supported the proposed model. By examining the microfoundations of accent-based bias, the ideas explored in this dissertation set the stage for future research in an increasingly multilingual world.
Abstract:
High throughput next generation sequencing, together with advanced molecular methods, has considerably enhanced the field of food microbiology. By overcoming biases associated with culture-dependent approaches, it has become possible to achieve novel insights into the nature of food-borne microbial communities. In this thesis, several different sequencing-based approaches were applied with a view to better understanding microbe-associated quality defects in cheese. Initially, a literature review provides an overview of microbe-associated cheese quality defects as well as molecular methods for profiling complex microbial communities. Following this, 16S rRNA sequencing revealed temporal and spatial differences in microbial composition due to the time during the production day that specific commercial cheeses were manufactured. A novel Ion PGM sequencing approach, focusing on decarboxylase genes rather than 16S rRNA genes, was then successfully employed to profile the biogenic amine producing cohort of a series of artisanal cheeses. Investigations into the phenomenon of cheese pinking formed the basis of a joint 16S rRNA and whole genome shotgun sequencing approach, leading to the identification of Thermus species and, more specifically, the pathway involved in production of lycopene, a red coloured carotenoid. Finally, using a more traditional approach, the effect of addition of a facultatively heterofermentative Lactobacillus (Lactobacillus casei) to a Swiss-type cheese, in which starter activity was compromised, was investigated from the perspective of its ability to promote gas defects and irregular eye formation. X-ray computed tomography was used to visualise, using a non-destructive method, the consequences of the undesirable gas formation that resulted.
Ultimately, this thesis has demonstrated that the application of molecular techniques, such as next generation sequencing, can provide detailed insight into the defect-causing microbial populations present and may thereby underpin approaches to optimise the quality and consistency of a wide variety of cheeses.
Abstract:
HURDAT is the main historical archive of all tropical storms and hurricanes in the North Atlantic Basin, which includes the Caribbean Sea and Gulf of Mexico, from 1851 to the present. HURDAT is maintained and updated annually by the National Hurricane Center in Miami, Florida. Today, HURDAT is widely used by research scientists, operational hurricane forecasters, insurance companies, emergency managers and others. HURDAT contains both systematic biases and random errors, making its reanalysis vital. For this thesis, HURDAT is reanalyzed for the period 1954-1963. The track and intensity of each existing tropical cyclone in HURDAT are assessed in the light of 21st century understanding, and previously unrecognized tropical cyclones are detected and analyzed. The resulting changes will be recommended to the National Hurricane Center Best Track Change Committee for inclusion in HURDAT.
Abstract:
Background: Mothers with HIV often face personal and environmental risks for poor maternal health behaviors and infant neglect, even when HIV transmission to the infant was prevented. Maternal-fetal attachment (MFA), the pre-birth relationship of a woman with her fetus, may be the precursor to maternal caregiving. Using the strengths perspective in social work, which embeds MFA within a socio-ecological conceptual framework, it is hypothesized that high levels of maternal-fetal attachment may protect mothers and infants against poor maternal health behaviors. Objective: To assess whether MFA together with history of substance use, living marital status, planned pregnancy status, and timing of HIV diagnosis predict three desirable maternal health behaviors (pregnancy care, adherence to prenatal antiretroviral therapy (ART), and infant's screening clinic care) among pregnant women with HIV/AIDS. Method: Prospective observation and hypothesis-testing multivariate analyses. Over 17 consecutive months, all eligible English- or Spanish-speaking pregnant women with HIV (n = 110) were approached in the principal obstetric and screening clinics in Miami-Dade County, Florida at 24 weeks' gestation; 82 agreed to enroll. During three data collection periods from enrollment until 16 weeks after childbirth (range: 16 to 32 weeks), participants reported on socio-demographic and predictor variables, MFA, and pregnancy care. Measures of adherence to ART and infant care were extracted from medical records. Findings: Sociodemographic, pregnancy, and HIV disease characteristics in this sample suggest changes in the makeup of HIV-infected pregnant women parallel to the evolution of the HIV epidemic in the USA over the past two decades. The MFA model predicted maternal health behaviors for pregnancy care (R² = .37), with MFA, marital living status, and planned pregnancy status independently contributing (β = .50, β = .28, β = .23, respectively).
It did not predict adherence to ART medication or infant care. Relevance: These findings provide the first focused evidence of the protective role of MFA against poor maternal health behaviors among pregnant women with HIV, in the presence of adverse life circumstances. Social desirability biases in some self-report measures may limit the findings. Suggestions are made for orienting future inquiry on maternal health behaviors during childbirth toward relationship and protection.
Abstract:
We calculate net community production (NCP) during summer 2005-2006 and spring 2006 in the Ross Sea using multiple approaches to determine the magnitude and consistency of rates. Water column carbon and nutrient inventories and surface ocean O2/Ar data are compared to satellite-derived primary productivity (PP) estimates and 14C uptake experiments. In spring, NCP was related to stratification proximal to upper ocean fronts. In summer, the most intense C drawdown was in shallow mixed layers affected by ice melt; depth-integrated C drawdown, however, increased with mixing depth. ΔO2/Ar-based methods, relying on gas exchange reconstructions, underestimate NCP due to seasonal variations in surface ΔO2/Ar and NCP rates. Mixed layer ΔO2/Ar requires approximately 60 days to reach steady state, starting from early spring. Additionally, cold temperatures prolong the sensitivity of gas exchange reconstructions to past NCP variability. Complex vertical structure, in addition to the seasonal cycle, affects interpretations of surface-based observations, including those made from satellites. During both spring and summer, substantial fractions of NCP were below the mixed layer. Satellite-derived estimates tended to overestimate PP relative to 14C-based estimates, most severely in locations of stronger upper water column stratification. Biases notwithstanding, NCP-PP comparisons indicated that community respiration was of similar magnitude to NCP. We observed that a substantial portion of NCP remained as suspended particulate matter in the upper water column, demonstrating a lag between production and export. Resolving the dynamic physical processes that structure variance in NCP and its fate will enhance the understanding of the carbon cycling in highly productive Antarctic environments.
Abstract:
The TEX86H temperature proxy is a relatively new proxy based on crenarchaeotal lipids and has rarely been applied together with other temperature proxies. In this study, we applied the TEX86H on a sediment core from the Alboran Sea (western Mediterranean, core ODP-977A) covering the penultimate climate cycle, that is, from 244 to 130 ka, and compared this with previously published sea surface temperatures derived from the Uk'37 of alkenones of haptophyta and Mg/Ca records of planktonic foraminifera. The TEX86H temperature record shows remarkably similar stadial-interstadial patterns and abrupt temperature changes to those observed with the Uk'37 palaeothermometer. Absolute TEX86H temperature estimates are generally higher than those of Uk'37, though this difference (<3°C in 81% of the data points) is mainly within the temperature calibration error for both proxies, suggesting that crenarchaeota and haptophyta experienced similar temperature variations. During occasional events (<5% of the analyzed time span), however, the TEX86H exhibits considerably higher absolute temperature estimates than the Uk'37. Comparison with Mg/Ca records of planktonic foraminifera as well as other Mediterranean TEX86 and Uk'37 records suggests that part of this divergence may be attributed to seasonal differences, that is, with TEX86H reflecting mainly the warm summer season while Uk'37 would show the annual mean. Biases in the global calibration of both proxies or specific biases in the Mediterranean are an alternative, though less likely, explanation. Despite differences between absolute TEX86H and Uk'37 temperatures, the correlation between the two proxies (r² = 0.59, 95% significance) provides support for the occurrence of abrupt temperature variations in the western Mediterranean during the penultimate interglacial-to-glacial cycle.
Abstract:
The domestication of plants and animals marks one of the most significant transitions in human, and indeed global, history. Traditionally, study of the domestication process was the exclusive domain of archaeologists and agricultural scientists; today it is an increasingly multidisciplinary enterprise that has come to involve the skills of evolutionary biologists and geneticists. Although the application of new information sources and methodologies has dramatically transformed our ability to study and understand domestication, it has also generated increasingly large and complex datasets, the interpretation of which is not straightforward. In particular, challenges of equifinality, evolutionary variance, and emergence of unexpected or counter-intuitive patterns all face researchers attempting to infer past processes directly from patterns in data. We argue that explicit modeling approaches, drawing upon emerging methodologies in statistics and population genetics, provide a powerful means of addressing these limitations. Modeling also offers an approach to analyzing datasets that avoids conclusions steered by implicit biases, and makes possible the formal integration of different data types. Here we outline some of the modeling approaches most relevant to current problems in domestication research, and demonstrate the ways in which simulation modeling is beginning to reshape our understanding of the domestication process.
Abstract:
Evaluating the performance of ocean-colour retrievals of total chlorophyll-a concentration requires direct comparison with concomitant and co-located in situ data. For global comparisons, these in situ match-ups should ideally be representative of the distribution of total chlorophyll-a concentration in the global ocean. The oligotrophic gyres constitute the majority of oceanic water, yet are under-sampled due to their inaccessibility and under-represented in global in situ databases. The Atlantic Meridional Transect (AMT) is one of only a few programmes that consistently sample oligotrophic waters. In this paper, we used a spectrophotometer on two AMT cruises (AMT19 and AMT22) to continuously measure absorption by particles in the water of the ship's flow-through system. From these optical data, continuous total chlorophyll-a concentrations were estimated with high precision and accuracy along each cruise and used to evaluate the performance of ocean-colour algorithms. We conducted the evaluation using level 3 binned ocean-colour products, and used the high spatial and temporal resolution of the underway system to maximise the number of match-ups on each cruise. Statistical comparisons show a significant improvement in the performance of satellite chlorophyll algorithms over previous studies, with root mean square errors on average less than half (~ 0.16 in log10 space) that reported previously using global datasets (~ 0.34 in log10 space). This improved performance is likely due to the use of continuous absorption-based chlorophyll estimates, which are highly accurate, sample spatial scales more comparable with satellite pixels, and minimise human errors. Previous comparisons might have reported higher errors due to regional biases in datasets and methodological inconsistencies between investigators.
Furthermore, our comparison showed an underestimate in satellite chlorophyll at low concentrations in 2012 (AMT22), likely due to a small bias in satellite remote-sensing reflectance data. Our results highlight the benefits of using underway spectrophotometric systems for evaluating satellite ocean-colour data and underline the importance of maintaining in situ observatories that sample the oligotrophic gyres.
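The error figures quoted in this abstract (~0.16 vs ~0.34) are root mean square errors computed in log10 space, a standard way to compare chlorophyll values that span orders of magnitude. A minimal sketch of that metric, with invented satellite and in situ values:

```python
import numpy as np

def rmse_log10(sat, insitu):
    """Root mean square error between paired concentrations,
    computed on log10-transformed values (both inputs in mg m^-3)."""
    d = np.log10(np.asarray(sat)) - np.log10(np.asarray(insitu))
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical match-up pairs: satellite retrievals vs in situ chlorophyll-a.
sat = [0.05, 0.12, 0.30, 1.1]
insitu = [0.06, 0.10, 0.28, 1.0]
print(round(rmse_log10(sat, insitu), 3))
```

Working in log10 space keeps a 20% error at 0.05 mg m^-3 and a 20% error at 1 mg m^-3 equally weighted, which is why the metric is used for global chlorophyll comparisons.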
Abstract:
Investigating the variability of Agulhas leakage, the volume transport of water from the Indian Ocean to the South Atlantic Ocean, is highly relevant due to its potential contribution to the Atlantic Meridional Overturning Circulation as well as the global circulation of heat and salt and hence global climate. Quantifying Agulhas leakage is challenging due to the non-linear nature of this process; current observations are insufficient to estimate its variability, and ocean models all have biases in this region, even at high resolution. An Eulerian threshold integration method is developed to examine the mechanisms of Agulhas leakage variability in six ocean model simulations of varying resolution. This intercomparison, based on the circulation and thermohaline structure at the Good Hope line, a transect to the southwest of the southern tip of Africa, is used to identify features that are robust regardless of the model used and takes into account the thermohaline biases of each model. When benchmarked against a passive tracer method, the approach captures 60% of the magnitude of Agulhas leakage and more than 80% of its temporal fluctuations, suggesting that it is appropriate for investigating the variability of Agulhas leakage. In all simulations but one, the major driver of variability is associated with mesoscale features passing through the section. High resolution (<1/10 deg.) hindcast models agree on the temporal (2-4 cycles per year) and spatial (300-500 km) scales of these features, corresponding to observed Agulhas Rings. Coarser resolution models (<1/4 deg.) reproduce a similar time scale of variability of Agulhas leakage in spite of their difficulties in representing the properties of Agulhas rings. A coarser resolution climate model (2 deg.) does not resolve the spatio-temporal mechanism of variability of Agulhas leakage, and is hence expected to underestimate the contribution of the Agulhas Current System to climate variability.
Abstract:
Activation triggers the exchange of subunits in Ca(2+)/calmodulin-dependent protein kinase II (CaMKII), an oligomeric enzyme that is critical for learning, memory, and cardiac function. The mechanism by which subunit exchange occurs remains elusive. We show that the human CaMKII holoenzyme exists in dodecameric and tetradecameric forms, and that the calmodulin (CaM)-binding element of CaMKII can bind to the hub of the holoenzyme and destabilize it to release dimers. The structures of CaMKII from two distantly diverged organisms suggest that the CaM-binding element of activated CaMKII acts as a wedge by docking at intersubunit interfaces in the hub. This converts the hub into a spiral form that can release or gain CaMKII dimers. Our data reveal a three-way competition for the CaM-binding element, whereby phosphorylation biases it towards the hub interface, away from the kinase domain and calmodulin, thus unlocking the ability of activated CaMKII holoenzymes to exchange dimers with unactivated ones.