881 results for N-based linear spacers


Relevance:

30.00%

Publisher:

Abstract:

Protein α-helical coiled coil structures that elicit antibody responses, which block critical functions of medically important microorganisms, represent a means for vaccine development. Using bioinformatics algorithms, a total of 50 antigens with α-helical coiled coil motifs orthologous to those of Plasmodium falciparum were identified in the P. vivax genome. The peptides identified in silico were chemically synthesized; circular dichroism studies indicated partial or high α-helical content. Antigenicity was evaluated using human serum samples from malaria-endemic areas of Colombia and Papua New Guinea. Eight of these fragments were selected and used to assess immunogenicity in BALB/c mice. ELISA assays indicated strong reactivity with the α-helical coiled coil structures of serum samples from individuals residing in malaria-endemic regions and of sera from immunized mice. In addition, ex vivo production of IFN-γ by murine mononuclear cells confirmed the immunogenicity of these structures and the presence of T-cell epitopes in the peptide sequences. Moreover, sera of mice immunized with four of the eight antigens recognized native proteins on blood-stage P. vivax parasites, and antigenic cross-reactivity of three of the peptides was observed with both the orthologous P. falciparum fragments and whole parasites. These results point to the α-helical coiled coil peptides as possible P. vivax malaria vaccine candidates, as was observed for P. falciparum. The fragments selected here warrant further study in humans and non-human primate models to assess their protective efficacy as single components or assembled as hybrid linear epitopes.

Relevance:

30.00%

Publisher:

Abstract:

Background: There is growing evidence that traffic-related air pollution reduces birth weight. Improving exposure assessment is a key issue to advance in this research area. Objective: We investigated the effect of prenatal exposure to traffic-related air pollution via geographic information system (GIS) models on birth weight in 570 newborns from the INMA (Environment and Childhood) Sabadell cohort. Methods: We estimated pregnancy and trimester-specific exposures to nitrogen dioxide and aromatic hydrocarbons [benzene, toluene, ethylbenzene, m/p-xylene, and o-xylene (BTEX)] by using temporally adjusted land-use regression (LUR) models. We built models for NO2 and BTEX using four and three 1-week measurement campaigns, respectively, at 57 locations. We assessed the relationship between prenatal air pollution exposure and birth weight with linear regression models. We performed sensitivity analyses considering time spent at home and time spent in nonresidential outdoor environments during pregnancy. Results: In the overall cohort, neither NO2 nor BTEX exposure was significantly associated with birth weight in any of the exposure periods. When considering only women who spent < 2 hr/day in nonresidential outdoor environments, the estimated reductions in birth weight associated with an interquartile range increase in BTEX exposure levels were 77 g [95% confidence interval (CI), 7–146 g] and 102 g (95% CI, 28–176 g) for exposures during the whole pregnancy and the second trimester, respectively. The effects of NO2 exposure were less clear in this subset. Conclusions: The association of BTEX with reduced birth weight underscores the negative role of vehicle exhaust pollutants in reproductive health. Time–activity patterns during pregnancy complement GIS-based models in exposure assessment.
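
The analysis described here is, in essence, a covariate-adjusted linear regression of birth weight on an IQR-standardized exposure, refit on the time-activity subset. A minimal sketch of that kind of model is shown below; the file name, column names and covariates are hypothetical, not taken from the study.

```python
# Minimal sketch (not the authors' code): birth weight regressed on an IQR-standardized
# exposure with covariate adjustment, then refit on the subset of women who spent
# < 2 h/day in nonresidential outdoor environments. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("inma_sabadell.csv")            # hypothetical input file

iqr = df["btex_pregnancy"].quantile(0.75) - df["btex_pregnancy"].quantile(0.25)
df["btex_iqr"] = df["btex_pregnancy"] / iqr      # effect expressed per IQR increase

formula = "birth_weight ~ btex_iqr + gestational_age + maternal_age + parity"
overall = smf.ols(formula, data=df).fit()
print(overall.params["btex_iqr"], overall.conf_int().loc["btex_iqr"])

# Sensitivity analysis: restrict to women with < 2 h/day in nonresidential outdoor settings
subset = df[df["outdoor_hours"] < 2]
restricted = smf.ols(formula, data=subset).fit()
print(restricted.params["btex_iqr"], restricted.conf_int().loc["btex_iqr"])
```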

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we now determined their window periods. METHODS: We classified Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months' duration according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e., the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined using the relationship 'Prevalence = Incidence x Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR also derived from Inno-Lia results, but using the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. RESULTS: Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R2 = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 obtained for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. CONCLUSIONS: IIR estimations by window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
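
The two computational steps in the methods (the regression-derived window and the window-based IIR) can be sketched as below; the data points, counts and the x-intercept convention are illustrative assumptions, not the paper's values.

```python
# Minimal sketch (assumptions noted in comments) of the two steps described above:
# (1) estimate an algorithm's window as the time at which the fitted proportion of
#     patients still ruled "incident" reaches zero, via simple linear regression;
# (2) convert the prevalence of "incident" results among notifications into an
#     incident infection rate using Prevalence = Incidence x Duration.
import numpy as np

# Hypothetical data: months since infection vs. proportion of cases ruled incident
months = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
prop_incident = np.array([0.90, 0.75, 0.60, 0.45, 0.30, 0.20, 0.10, 0.02])

slope, intercept = np.polyfit(months, prop_incident, 1)
window_months = -intercept / slope             # x-intercept: fitted proportion hits zero
window_days = window_months * 30.44
print(f"window ~ {window_days:.1f} days")

# Window-based incident infection rate for one annual notification cohort
n_notifications = 748                          # e.g. the size of the 2005/06 cohort
n_ruled_incident = 115                         # hypothetical count ruled incident
prevalence = n_ruled_incident / n_notifications
iir = prevalence / (window_days / 365.25)      # Incidence = Prevalence / Duration
print(f"IIR ~ {iir:.3f}")
```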

Relevance:

30.00%

Publisher:

Abstract:

Abstract
"Sitting between your past and your future doesn't mean you are in the present." - Dakota Skye
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected from the linear addition of the parts, is a key feature shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions among cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure, starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we used artificial topologies believed to be closer to those of gene regulatory networks. We also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we addressed the improbable full synchronicity of the events taking place on Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e., the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models exhibit the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
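
For orientation, the classical Kauffman random Boolean network that the second part builds on can be sketched in a few lines; the sketch below uses the fully synchronous update scheme the abstract questions, and everything in it (N, K, the random wiring) is a toy assumption rather than the thesis model.

```python
# Minimal sketch of a Kauffman-style random Boolean network (not the thesis code):
# N nodes, each with K randomly chosen inputs and a random Boolean update function,
# iterated with the classical fully synchronous scheme until an attractor is revisited.
import random

N, K = 12, 2
random.seed(1)

inputs = [random.sample(range(N), K) for _ in range(N)]           # random wiring
# one random Boolean function per node, stored as a truth table over 2**K input patterns
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronous update: every node reads its inputs from the previous global state."""
    new = []
    for i in range(N):
        idx = 0
        for inp in inputs[i]:
            idx = (idx << 1) | state[inp]
        new.append(tables[i][idx])
    return new

state = [random.randint(0, 1) for _ in range(N)]
seen = {}
for t in range(200):
    key = tuple(state)
    if key in seen:                        # state revisited: an attractor has been found
        print(f"attractor of period {t - seen[key]} reached after {seen[key]} steps")
        break
    seen[key] = t
    state = step(state)
```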

Relevance:

30.00%

Publisher:

Abstract:

The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ('hard' data) are combined with secondary information on local uncertainties (treated as 'soft' data) to generate a science-based uncertainty assessment of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of the posterior probability distributions generated across space, while no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. Finally, a separate uncertainty analysis was conducted to evaluate the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available in certain populated sites. The analysis illustrates the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, both of which are useful features for the Chernobyl fallout study.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Urinary creatinine excretion is used as a marker of completeness of timed urine collections, which are a keystone of several metabolic evaluations in clinical investigations and epidemiological surveys. METHODS: We used data from two independent Swiss cross-sectional population-based studies with standardised 24-hour urine collection and measured anthropometric variables. Only data from adults of European descent, with an estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m2 and reported completeness of the urine collection, were retained. A linear regression model was developed to predict centiles of the 24-hour urinary creatinine excretion in 1,137 participants from the Swiss Survey on Salt and validated in 994 participants from the Swiss Kidney Project on Genes in Hypertension. RESULTS: The mean urinary creatinine excretion was 193 ± 41 μmol/kg/24 hours in men and 151 ± 38 μmol/kg/24 hours in women in the Swiss Survey on Salt. The values were inversely correlated with age and body mass index (BMI). CONCLUSIONS: We propose a validated prediction equation for 24-hour urinary creatinine excretion in the general European population, based on readily available variables such as age, sex and BMI, together with a few derived nomograms to ease its clinical application. This should help healthcare providers interpret the completeness of a 24-hour urine collection in daily clinical practice and in epidemiological population studies.
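
The clinical use of such an equation is to compare a measured 24-hour excretion against the value predicted from age, sex and BMI. The sketch below illustrates that workflow only; the intercepts and slopes are placeholders, not the published coefficients.

```python
# Minimal sketch (hypothetical coefficients, NOT the published equation) of how a
# prediction equation of this form would be applied: expected 24-h urinary creatinine
# from age, sex, BMI and weight, compared against the measured value to judge completeness.
def predicted_creatinine_umol_per_24h(age_years, bmi, weight_kg, male):
    # per-kg excretion declines with age and BMI; base values echo the means reported
    # above, but the slopes are placeholders for illustration only
    base = 193.0 if male else 151.0                                # umol/kg/24 h
    per_kg = base - 0.9 * (age_years - 50) - 1.5 * (bmi - 25)      # hypothetical slopes
    return per_kg * weight_kg

measured = 11_500                                  # umol/24 h in a hypothetical collection
expected = predicted_creatinine_umol_per_24h(age_years=60, bmi=27, weight_kg=80, male=True)
ratio = measured / expected
print(f"measured/expected = {ratio:.2f}")          # ratios far from 1 suggest an
                                                   # incomplete or over-collected sample
```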

Relevance:

30.00%

Publisher:

Abstract:

Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
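
As a point of orientation, the role of $\Gamma$ and the substitution by $\Omega$ can be pictured with the usual sandwich expression for minimum-discrepancy estimators in covariance structure analysis; the notation below ($s$, $\sigma(\theta)$, $\Delta$, $V$) is generic and assumed, not quoted from the paper.

```latex
% Generic sandwich form (notation assumed: s = sample moments, \sigma(\theta) = model
% moments, \Delta = \partial\sigma/\partial\theta', V = weight matrix of the fit function).
\[
\sqrt{n}\,\bigl(\hat\theta - \theta_0\bigr) \;\xrightarrow{\;d\;}\;
N\!\Bigl(0,\;(\Delta' V \Delta)^{-1}\,\Delta' V\,\Gamma\,V\Delta\,(\Delta' V \Delta)^{-1}\Bigr),
\qquad
\Gamma \;=\; \operatorname{avar}\bigl(\sqrt{n}\,s\bigr).
\]
```

The asymptotic-robustness result then says that, under the stated conditions, $\Gamma$ in such expressions may be replaced by a matrix $\Omega$ built only from cross-product (second-order) moments of the observed variables, so that normal-theory software yields valid standard errors and chi-square tests without estimating fourth-order sample moments.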

Relevance:

30.00%

Publisher:

Abstract:

We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure: We show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then we can reformulate the problem of optimizing a linear objective of throughput-WIP performance as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. We analyze in this framework the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy with that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; (e) a unified treatment of the time-discounted and time-average cases.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and next these states were embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
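
To make the difference concrete, one common way to write a linear versus a power QALY model is to value q quality-weighted years as q·t versus q·t^r; the small illustration below uses that form with r = 0.65, the coefficient reported above. The functional form is an assumption for illustration, not quoted from the paper.

```python
# Small illustration (functional forms assumed): QALY value of spending t years in a
# health state with quality weight q under a linear model (q * t) versus a power model
# (q * t ** r) with the power coefficient reported above.
def qaly_linear(q, t):
    return q * t

def qaly_power(q, t, r=0.65):          # r = 0.65, the optimal coefficient reported above
    return q * t ** r

q, t = 0.7, 10.0                        # hypothetical health-state weight and duration
print(qaly_linear(q, t))                # 7.0
print(qaly_power(q, t))                 # ~3.13: duration enters non-linearly
```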

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Longevity has been attributed to decreased cardiovascular mortality. Subjects with long-lived parents may represent a valuable group in which to study cardiovascular risk factors (CVRF) associated with longevity, possibly leading to new ways of preventing cardiovascular disease. Methods: We analyzed data from a population-based sample of 2561 participants (1163 men and 1398 women) aged 55-75 years from the city of Lausanne, Switzerland (CoLaus study). Participants were stratified by the number of parents (0, 1, 2) who survived to 85 years or more. Trend across these strata was assessed using a non-parametric kmean test. The associations of parental age (independent covariate used as a proxy for longevity) with fasting blood glucose, blood pressure, blood lipids, body mass index (BMI), weight, height and liver enzymes (continuous dependent variables) were analyzed using multiple linear regressions. Models were adjusted for age, sex, alcohol consumption, smoking and educational level, and additionally for BMI in the case of liver enzymes. Results: For subjects with 0 (N=1298), 1 (N=991) and 2 (N=272) long-lived parents, median BMI (interquartile range) was 25.4 (6.5), 24.9 (6.1) and 23.7 (4.8) kg/m2 in women (P<0.001), and 27.3 (4.8), 27.0 (4.5) and 25.9 (4.9) kg/m2 in men (P=0.04), respectively; median weight was 66.5 (16.1), 65.0 (16.4) and 63.4 (13.7) kg in women (P=0.003), and 81.5 (17.0), 81.4 (16.4) and 80.3 (17.1) kg in men (P=0.36). Median height was 161 (8), 162 (9) and 163 (8) cm in women (P=0.005), and 173 (9), 174 (9) and 174 (11) cm in men (P=0.09). The corresponding medians for AST (aspartate aminotransferase) were 31 (13), 29 (11) and 28 (10) U/L (P=0.002), and 28 (17), 27 (14) and 26 (19) U/L for ALT (alanine aminotransferase; P=0.053) in men. In multivariable analyses, greater parental longevity was associated with lower BMI, lower weight and taller stature in women (P<0.01) and lower AST in men (P=0.011). No significant associations were observed for the other variables analyzed. Sensitivity analyses restricted to subjects whose parents were dead (N=1844) led to similar results, with even stronger associations of parental longevity with liver enzymes in men. Conclusion: In women, increased parental longevity was associated with lower BMI, attributable to lower weight and taller stature. In men, the association of increased parental longevity with lower liver enzymes, independently of BMI, suggests that parental longevity may be associated with decreased nonalcoholic fatty liver disease.

Relevance:

30.00%

Publisher:

Abstract:

Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from 'as if' linear models. This paper illuminates the distinctions between these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of lens model research with novel methodology developed to specify the effectiveness of heuristics in different environments, and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge, and thus maps, of when and which heuristic to employ.
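
For orientation, lens model studies of the kind meta-analyzed here are conventionally summarized by the lens model equation; the notation below is the standard one and is given as background, not quoted from this paper.

```latex
% Standard lens model equation (background only; conventional notation, not quoted
% from the paper): achievement r_a decomposes into a modeled and a residual component.
\[
r_a \;=\; G\,R_e\,R_s \;+\; C\,\sqrt{1-R_e^{2}}\,\sqrt{1-R_s^{2}},
\]
% r_a : correlation between judgments and the criterion (achievement)
% R_e : predictability of the environment by a linear model of the cues
% R_s : consistency of the judge, i.e. fit of a linear model to the judgments
% G   : correlation between the predictions of the two linear models (matching/knowledge)
% C   : correlation between the residuals of the two linear models
```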

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the development and applications of a super-resolution method known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm works by combining different lower-resolution images in order to obtain, as a result, a higher-resolution image. We show that it can make significant spatial resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with sizes approaching the limiting spatial resolution of the lower-resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy but never before used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show the usefulness of the algorithm when working with images as different from astronomical images as remote sensing ones, and show applications to: 1) a set of simulated multispectral images obtained from a real QuickBird image; and 2) a set of real multispectral Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
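
The core idea (several sub-pixel-shifted low-resolution frames accumulated onto a finer grid together with a weight map) can be illustrated with the heavily simplified "shift-and-add" sketch below. This is a stand-in for intuition only: the actual Variable-Pixel Linear Reconstruction (drizzle) algorithm distributes flux by fractional pixel overlap rather than rounding to the nearest output pixel.

```python
# Heavily simplified "shift-and-add" sketch of the idea behind drizzle-like linear
# reconstruction (NOT the actual Variable-Pixel Linear Reconstruction algorithm):
# low-resolution frames with known sub-pixel shifts are accumulated onto a finer grid
# alongside a weight map, and the output is the weight-normalized accumulation.
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of (h, w) arrays; shifts: list of (dy, dx) in low-res pixel units."""
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(out)
    for frame, (dy, dx) in zip(frames, shifts):
        for y in range(h):
            for x in range(w):
                oy = int(round((y + dy) * scale))
                ox = int(round((x + dx) * scale))
                if 0 <= oy < h * scale and 0 <= ox < w * scale:
                    out[oy, ox] += frame[y, x]
                    weight[oy, ox] += 1.0
    # output pixels never hit by any input keep weight 0 and are left at 0 here
    return np.divide(out, weight, out=np.zeros_like(out), where=weight > 0)

# Toy usage: four frames of the same scene with half-pixel offsets (stand-ins for real data)
rng = np.random.default_rng(0)
truth = rng.random((32, 32))
frames = [truth, truth, truth, truth]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hi_res = shift_and_add(frames, shifts, scale=2)
print(hi_res.shape)                       # (64, 64)
```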

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Limited information exists regarding the association between serum uric acid (SUA) and psychiatric disorders. We explored the relationship between SUA and subtypes of major depressive disorder (MDD) and specific anxiety disorders. Additionally, we examined the association of the SLC2A9 rs6855911 variant with anxiety disorders. METHODS: We conducted a cross-sectional analysis of 3,716 individuals aged 35-66 years previously selected for the population-based CoLaus survey who agreed to undergo further psychiatric evaluation. SUA was measured using the uricase-PAP method. The French translation of the semi-structured Diagnostic Interview for Genetic Studies was used to establish lifetime and current diagnoses of depression and anxiety disorders according to DSM-IV criteria. RESULTS: Men reported significantly higher levels of SUA than women (357±74 µmol/L vs. 263±64 µmol/L). The prevalence of lifetime and current MDD was 44% and 18%, respectively, while the corresponding estimates for any anxiety disorder were 18% and 10%. A quadratic, hockey-stick-shaped curve explained the relationship between SUA and social phobia better than a linear trend. However, with regard to the other specific anxiety disorders and the other subtypes of MDD, there was no consistent pattern of association. Further analyses using the SLC2A9 rs6855911 variant, known to be strongly associated with SUA, supported the quadratic relationship observed between the SUA phenotype and social phobia. CONCLUSIONS: A quadratic relationship between SUA and social phobia was observed, consistent with a protective effect of moderately elevated SUA on social phobia which disappears at higher concentrations. Further studies are needed to confirm our observations.
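
One standard way to compare a linear with a quadratic relationship in this setting is to contrast two nested logistic regressions with a likelihood-ratio test, as sketched below; the file and column names, covariates and test choice are illustrative assumptions, not the study's analysis plan.

```python
# Minimal sketch (names hypothetical): linear vs. quadratic serum uric acid term in a
# logistic model for social phobia, compared with a likelihood-ratio test.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("colaus_psy.csv")                  # hypothetical input file
df["sua_c"] = df["sua"] - df["sua"].mean()          # center SUA before squaring
df["sua_c2"] = df["sua_c"] ** 2

linear = smf.logit("social_phobia ~ sua_c + age + sex", data=df).fit(disp=0)
quadratic = smf.logit("social_phobia ~ sua_c + sua_c2 + age + sex", data=df).fit(disp=0)

lr = 2 * (quadratic.llf - linear.llf)               # likelihood-ratio statistic, 1 df
p = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p:.3f}")                # a small p favours the quadratic term
```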

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Several studies have reported increased levels of inflammatory biomarkers in chronic kidney disease (CKD), but data from the general population are sparse. In this study, we assessed levels of the inflammatory markers high-sensitivity C-reactive protein (hsCRP), tumor necrosis factor α (TNF-α), interleukin (IL)-1β and IL-6 across all ranges of renal function. METHODS: We conducted a cross-sectional study in a random sample of 6,184 Caucasian subjects aged 35-75 years in Lausanne, Switzerland. Serum levels of hsCRP, TNF-α, IL-6, and IL-1β were measured in 6,067 participants (98.1%); serum creatinine-based estimated glomerular filtration rate (eGFR(creat), CKD-EPI formula) was used to assess renal function, and the albumin/creatinine ratio on spot morning urine to assess microalbuminuria (MAU). RESULTS: Higher serum levels of IL-6, TNF-α and hsCRP and lower levels of IL-1β were associated with lower renal function, CKD (eGFR(creat) <60 ml/min/1.73 m(2); n = 283), and MAU (n = 583). In multivariate linear regression analysis adjusted for age, sex, hypertension, smoking, diabetes, body mass index, lipids, and antihypertensive and hypolipemic therapy, only log-transformed TNF-α remained independently associated with lower renal function (β = -0.54 ± 0.19). In multivariate logistic regression analysis, higher TNF-α levels were associated with CKD (OR 1.17; 95% CI 1.01-1.35), whereas higher levels of IL-6 (OR 1.09; 95% CI 1.02-1.16) and hsCRP (OR 1.21; 95% CI 1.10-1.32) were associated with MAU. CONCLUSION: We did not confirm a significant association between renal function and IL-6, IL-1β and hsCRP in the general population. However, our results demonstrate a significant association between TNF-α and renal function, suggesting a potential link between inflammation and the development of CKD. These data also confirm the association between MAU and inflammation.

Relevance:

30.00%

Publisher:

Abstract:

Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which then become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are related to the response in a non-linear way, their selection with the usual F test is not possible. In this work we propose a solution to this predictor-selection problem by defining generalized test statistics and adapting a non-parametric bootstrap method to estimate their p-values. We include a numerical example with automobile insurance data.
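
The two-step procedure (distances to latent coordinates, then ordinary least squares) can be sketched as below. The Euclidean dissimilarity, the toy data and the number of retained axes are illustrative assumptions; the generalized test statistics and bootstrap of the paper are not reproduced here.

```python
# Minimal sketch of the two-step procedure described above (not the paper's code):
# (1) observation-to-observation distances -> principal coordinates (latent variables),
# (2) ordinary least squares of the response on those latent coordinates.
import numpy as np

def principal_coordinates(d, k):
    """Classical multidimensional scaling of a distance matrix d, keeping k axes."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j                     # double-centred squared distances
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]              # keep the k largest eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.clip(vals, 0, None))   # latent coordinates

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))                       # original predictors (toy data)
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=100)    # non-linear link to the response

d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))   # Euclidean dissimilarities
z = principal_coordinates(d, k=5)
z1 = np.column_stack([np.ones(len(y)), z])
beta, *_ = np.linalg.lstsq(z1, y, rcond=None)       # step 2: OLS on the latent regressors
print(beta)
```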