917 results for Sum of logistics


Relevance: 80.00%

Publisher:

Abstract:

Marker ordering during linkage map construction is a critical component of QTL mapping research. In recent years, high-throughput genotyping methods have become widely used, and these methods may generate hundreds of markers for a single mapping population. This poses problems for linkage analysis software because the number of possible marker orders increases exponentially as the number of markers increases. In this paper, we tested the accuracy of linkage analyses on simulated recombinant inbred line data using the commonly used Map Manager QTX (Manly et al. 2001: Mammalian Genome 12, 930-932) software and RECORD (Van Os et al. 2005: Theoretical and Applied Genetics 112, 30-40). Accuracy was measured by calculating two scores: % correct marker positions, and a novel, weighted rank-based score derived from the sum of absolute values of true minus observed marker ranks divided by the total number of markers. The accuracy of maps generated using Map Manager QTX was considerably lower than that of maps generated using RECORD. Differences in linkage maps were often observed when marker ordering was performed several times using the identical dataset. In order to test the effect of reducing marker numbers on the stability of marker order, we pruned marker datasets, focusing on regions consisting of tightly linked clusters of markers, which included redundant markers. Marker pruning improved the accuracy and stability of linkage maps because a single unambiguous marker order was produced that was consistent across replications of analysis. Marker pruning was also applied to a real barley mapping population and QTL analysis was performed using different map versions produced by the different programs. While some QTLs were identified with both map versions, there were large differences in QTL mapping results. Differences included maximum LOD and R² values at QTL peaks and map positions, thus highlighting the importance of marker order for QTL mapping.
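The unweighted core of the rank-based score described above can be sketched in a few lines of Python (illustrative only; the paper's exact weighting scheme is not reproduced here):

```python
def rank_displacement_score(true_order, observed_order):
    """Mean absolute rank displacement between two marker orders.

    0.0 means the observed order matches the true order exactly;
    larger values indicate more badly misplaced markers.
    """
    true_rank = {marker: i for i, marker in enumerate(true_order)}
    obs_rank = {marker: i for i, marker in enumerate(observed_order)}
    n = len(true_order)
    return sum(abs(true_rank[m] - obs_rank[m]) for m in true_order) / n
```

A perfectly recovered order scores 0; swapping two adjacent markers in a four-marker map displaces two markers by one rank each, giving 2/4 = 0.5.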


The relationship between major depressive disorder (MDD) and bipolar disorder (BD) remains controversial. Previous research has reported differences and similarities in risk factors for MDD and BD, such as predisposing personality traits. For example, high neuroticism is related to both disorders, whereas openness to experience is specific for BD. This study examined the genetic association between personality and MDD and BD by applying polygenic scores for neuroticism, extraversion, openness to experience, agreeableness and conscientiousness to both disorders. Polygenic scores reflect the weighted sum of multiple single-nucleotide polymorphism alleles associated with the trait for an individual and were based on a meta-analysis of genome-wide association studies for personality traits including 13,835 subjects. Polygenic scores were tested for MDD in the combined Genetic Association Information Network (GAIN-MDD) and MDD2000+ samples (N=8921) and for BD in the combined Systematic Treatment Enhancement Program for Bipolar Disorder and Wellcome Trust Case-Control Consortium samples (N=6329) using logistic regression analyses. At the phenotypic level, personality dimensions were associated with MDD and BD. Polygenic neuroticism scores were significantly positively associated with MDD, whereas polygenic extraversion scores were significantly positively associated with BD. The explained variance of MDD and BD, approximately 0.1%, was highly comparable to the variance explained by the polygenic personality scores in the corresponding personality traits themselves (between 0.1 and 0.4%). This indicates that the proportions of variance explained in mood disorders are at the upper limit of what could have been expected. This study suggests shared genetic risk factors for neuroticism and MDD on the one hand and for extraversion and BD on the other.
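As a rough illustration, a polygenic score of the kind described, a weighted sum of risk-allele counts, can be computed as follows (hypothetical genotype coding and effect sizes; not the study's actual pipeline):

```python
import numpy as np

def polygenic_score(genotypes, weights):
    """Polygenic score: weighted sum of risk-allele counts.

    genotypes: (n_individuals, n_snps) array of allele counts (0, 1, 2)
    weights:   per-SNP effect sizes estimated in a discovery GWAS
    """
    return np.asarray(genotypes, dtype=float) @ np.asarray(weights, dtype=float)
```

For example, two individuals scored over three SNPs with effect sizes (0.5, -0.2, 0.1) each receive a single number summarizing their genetic loading on the trait.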


Disorders resulting from degenerative changes in the nervous system are progressive and incurable. Both environmental and inherited factors affect neuron function, and neurodegenerative diseases are often the sum of both factors. The cellular events leading to neuronal death are still mostly unknown. Monogenic diseases can offer a model for studying the mechanisms of neurodegeneration. Neuronal ceroid lipofuscinoses, or NCLs, are a group of monogenic, recessively inherited diseases affecting mostly children. NCLs cause severe and specific loss of neurons in the central nervous system, resulting in the deterioration of motor and mental skills and leading to premature death. In this thesis, the focus has been on two forms of NCL, the infantile NCL (INCL, CLN1) and the Finnish variant of late infantile NCL (vLINCLFin, CLN5). INCL is caused by mutations in the CLN1 gene encoding for the PPT1 (palmitoyl protein thioesterase 1) enzyme. PPT1 removes a palmitate moiety from proteins in experimental conditions, but its substrates in vivo are not known. In the Finnish variant of late infantile NCL (vLINCLFin), the CLN5 gene is defective, but the function of the encoded CLN5 has remained unknown. The aim of this thesis was to elucidate the disease mechanisms of these two NCL diseases by focusing on the molecular interactions of the defective proteins. In this work, the first interaction partner for PPT1, the mitochondrial F1-ATP synthase, was described. This protein has been linked to HDL metabolism in addition to its well-known role in the mitochondrial energy production. The connection between PPT1 and the F1-ATP synthase was studied utilizing the INCL-disease model, the genetically modified Ppt1-deficient mice. The levels of F1-ATP synthase subunits were increased on the surface of Ppt1-deficient neurons when compared to controls. We also detected several changes in lipid metabolism both at the cellular and systemic levels in Ppt1-deficient mice when compared to controls. 
The interactions between different NCL proteins were also elucidated. We were able to detect novel interactions between CLN5 and other NCL proteins, and to replicate the previously reported interactions. Some of the novel interactions influenced the intracellular trafficking of the proteins. The multiple interactions between CLN5 and other NCL proteins suggest a connection between the NCL subtypes at the cellular level. The main results of this thesis provide new information about the neuronal function of PPT1. The connection between INCL and neuronal lipid metabolism introduces a new perspective on this rather poorly characterized subject. The evidence of interactions between NCL proteins provides the basis for future research aiming to untangle the NCL disease mechanisms and to develop strategies for therapies.


Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do this it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption. This includes consumption by industries, public consumption, and household consumption. In addition to the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth’s population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for the assessment of the environmental relevance of consumption activities, another to identify the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream: that is, the belief that economic growth can cure all our problems. So the proponents have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative.
Efforts have to be taken on multiple levels by multiple actors, and all of them are needed, as they constitute the individual strings that together make up the rope. However, everyone must ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to articulate better to the public the message that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel to this, it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies such as regulatory instruments and economic instruments are the most effective. As for Civil Society Organizations, it is recommended that they overcome the habit of promoting Sustainable (in fact green) Consumption by using marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro-structures to foster them.


Background: Population pharmacokinetic models combined with multiple sets of age-concentration biomonitoring data facilitate back-calculation of chemical uptake rates from biomonitoring data. Objectives: We back-calculated uptake rates of PBDEs for the Australian population from multiple biomonitoring surveys (top-down) and compared them with uptake rates calculated from dietary intake estimates of PBDEs and PBDE concentrations in dust (bottom-up). Methods: Using three sets of PBDE elimination half-lives, we applied a population pharmacokinetic model to the PBDE biomonitoring data measured between 2002-2003 and 2010-2011 to derive the top-down uptake rates of four key PBDE congeners and six age groups. For the bottom-up approach, we used PBDE concentrations measured around 2005. Results: Top-down uptake rates of Σ4BDE (the sum of BDEs 47, 99, 100, and 153) varied from 7.9 to 19 ng/kg/day for toddlers and from 1.2 to 3.0 ng/kg/day for adults; in most cases they were, for all age groups, higher than the bottom-up uptake rates. The discrepancy was largest for toddlers, with factors up to 7-15 depending on the congener. Despite the different elimination half-lives of the four congeners, the age-concentration trends showed no increase in concentration with age and were similar for all congeners. Conclusions: In the bottom-up approach, PBDE uptake is underestimated; currently known pathways are not sufficient to explain measured PBDE concentrations, especially in young children. Although PBDE exposure of toddlers has declined in recent years, pre- and postnatal exposure to PBDEs has remained almost constant because the mothers’ PBDE body burden has not yet decreased substantially.
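A minimal sketch of the back-calculation idea, assuming a one-compartment model with first-order elimination at steady state (the study's population pharmacokinetic model is more elaborate, accounting for age, growth, and changing exposure; the lipid fraction below is an assumed illustrative value):

```python
import math

def backcalc_uptake(c_lipid_ng_per_g, half_life_years, body_lipid_fraction=0.25):
    """Back-calculate an uptake rate (ng/kg body weight/day) from a
    lipid-normalized biomonitoring concentration, assuming first-order
    elimination at steady state: uptake = body burden per kg * k_e.

    body_lipid_fraction is an assumed, illustrative lipid mass fraction.
    """
    k_e_per_day = math.log(2) / (half_life_years * 365.0)
    # ng per g lipid -> ng per kg body weight
    c_body_ng_per_kg = c_lipid_ng_per_g * body_lipid_fraction * 1000.0
    return c_body_ng_per_kg * k_e_per_day
```

For instance, 10 ng/g lipid with a 2-year half-life back-calculates to roughly 2.4 ng/kg/day under these assumptions.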


The problem of recovering information from measurement data has been studied for a long time. In the beginning, the methods were mostly empirical, but by the late 1960s Backus and Gilbert had started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove a partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
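The measurement model, the sum of two fractional Brownian motions with different amplitudes and Hurst parameters, can be simulated via the Cholesky factor of the fBm covariance (an illustrative sketch, not the thesis's inversion code):

```python
import numpy as np

def fbm_sample(n, hurst, amplitude=1.0, seed=0):
    """Sample a fractional Brownian motion at t = 1..n using the Cholesky
    factor of its covariance R(s, t) = (a^2 / 2)(s^2H + t^2H - |s - t|^2H)."""
    t = np.arange(1, n + 1, dtype=float)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * amplitude**2 * (s**(2 * hurst) + u**(2 * hurst)
                                - np.abs(s - u)**(2 * hurst))
    rng = np.random.default_rng(seed)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

# measurement model from the thesis: data = trend + noise, both fBm
data = fbm_sample(200, hurst=0.8, amplitude=2.0, seed=1) \
       + fbm_sample(200, hurst=0.3, amplitude=0.5, seed=2)
```

The inverse problem is then to recover the two amplitude-Hurst pairs from `data` alone.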


The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or the fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network. The fluxes through cycles and alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose that contains 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them. Thus, the relative abundances of different labelling patterns contain information about the fluxes that cannot be uncovered from the balance constraints derived from the steady state. The field of research that estimates the fluxes utilizing the measured constraints on the relative abundances of different labelling patterns induced by 13C labelled nutrients is called 13C metabolic flux analysis. There exist two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed, in which candidate fluxes are iteratively generated until they fit the measured abundances of different labelling patterns.
In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of different labelling patterns of metabolites. Thus, mathematically involved non-linear optimization methods that can get stuck in local optima can be avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes from the 13C labelling measurements as possible, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach for 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by the flow analysis of metabolite fragments in the network. Then new linear constraints on the fluxes are derived from the propagated data by applying the techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
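The steady-state balance constraints alone can be written as S v = 0 for a stoichiometric matrix S, and the unresolved fluxes through alternative pathways span its null space. A toy illustration (not the framework described in the thesis):

```python
import numpy as np

# Stoichiometric matrix S (rows: internal metabolites A, B; columns:
# fluxes v1..v4) for a toy network with two parallel pathways A -> B:
#   v1: uptake -> A,  v2: A -> B,  v3: A -> B (alternative),  v4: B -> out
S = np.array([[1, -1, -1,  0],    # d[A]/dt = 0
              [0,  1,  1, -1]])   # d[B]/dt = 0

# Steady state: S v = 0.  The null space of S contains every flux
# vector consistent with the balance constraints alone.
_, sv, vt = np.linalg.svd(S)
rank = int(np.sum(sv > 1e-10))
null_dim = S.shape[1] - rank
basis = vt[rank:].T               # columns span {v : S v = 0}

print(null_dim)  # 2: the v2/v3 split cannot be resolved from balances alone
```

A two-dimensional null space here means that even fixing the uptake flux leaves the split between the two parallel pathways undetermined, which is exactly the gap that the 13C labelling constraints close.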


Canonical forms for m-valued functions, referred to as m-Reed-Muller canonical (m-RMC) forms, that are a generalization of RMC forms of two-valued functions are proposed. m-RMC forms are based on the operations ⊕m (addition mod m) and ·m (multiplication mod m) and do not, as in the cases of the generalizations proposed in the literature, require an m-valued function for m not a power of a prime to be expressed by a canonical form for M-valued functions, where M > m is a power of a prime. Methods of obtaining the m-RMC forms from the truth vector or the sum-of-products representation of an m-valued function are discussed. Using a generalization of the Boolean difference to m-valued logic, series expansions for m-valued functions are derived.
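For a single m-valued variable with m prime, the m-RMC coefficients can be recovered uniquely from the truth vector; the brute-force sketch below (m = 3 by default) is illustrative only, not the derivation method proposed in the paper:

```python
from itertools import product

def m_rmc_coeffs(truth, m=3):
    """Find the coefficients (c0, ..., c_{m-1}) of the single-variable form
    f(x) = c0 + c1*x + ... + c_{m-1}*x^(m-1)  (mod m, m prime)
    matching a truth vector [f(0), ..., f(m-1)] by exhaustive search.
    The solution is unique for prime m (the Vandermonde matrix mod m
    is invertible), so the first match is the answer."""
    for coeffs in product(range(m), repeat=m):
        if all(sum(c * x**k for k, c in enumerate(coeffs)) % m == truth[x]
               for x in range(m)):
            return coeffs
    return None
```

For the truth vector [1, 2, 0] (the function f(x) = x + 1 mod 3) this returns (1, 1, 0).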


This paper considers the optimal allocation of a given amount of foreign aid between two recipient countries. It is shown that, given consumer preferences, a country following a more restrictive trade policy would receive a smaller share of the aid if the donor country maximises its own welfare in allocating aid. If, on the other hand, the donor country allocates aid in order to maximise the sum of the welfare of the two recipient countries, the result is just the opposite. Finally, we analyse the situation where the recipient countries compete with each other for the given amount of aid. It is shown that this competition tends to lower the level of optimal tariffs in the recipient countries.


This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous. Therefore logistic-type assisting models (e.g. logistic or probit model) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small, or if the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
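The classic GREG-lin estimator of a population total mentioned above has a compact form: the sum of model predictions over the population plus the Horvitz-Thompson-weighted sum of sample residuals. A sketch under simplifying assumptions (no domain subsetting; inclusion probabilities known for every sampled unit):

```python
import numpy as np

def greg_lin_total(y_s, X_s, pi_s, X_U):
    """GREG estimator of a population total with a linear assisting model.

    y_s : study variable for sampled units (0/1 indicator for a class frequency)
    X_s : auxiliary matrix for the sampled units
    pi_s: first-order inclusion probabilities of the sampled units
    X_U : auxiliary matrix for ALL population units
    """
    w = 1.0 / pi_s
    # design-weighted least-squares fit of the assisting model
    beta = np.linalg.solve(X_s.T @ (w[:, None] * X_s), X_s.T @ (w * y_s))
    y_hat_U = X_U @ beta          # predictions for every population unit
    y_hat_s = X_s @ beta
    # sum of predictions + HT-weighted sum of sample residuals
    return y_hat_U.sum() + np.sum(w * (y_s - y_hat_s))
```

When the assisting model fits the data exactly, the residual term vanishes and the estimator returns the true total; the study's L-GREG variants replace the linear predictions with logistic-type ones.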


We consider an optimal power and rate scheduling problem for a multiaccess fading wireless channel, with the objective of minimising a weighted sum of mean packet transmission delays subject to a peak power constraint. The base station acts as a controller which, depending upon the buffer lengths and the channel state of each user, allocates transmission rate and power to individual users. We assume perfect channel state information at the transmitter and the receiver. We also assume a Markov model for the fading and packet arrival processes. The policy obtained exhibits a form of indexability.


Ninety-nine per cent of the Earth's magnetic field is of internal origin, generated in the outer liquid core by the dynamo principle. In the 19th century, Carl Friedrich Gauss proved that the field can be described by a sum of spherical harmonic terms. Presently, this theory is the basis of e.g. the IGRF (International Geomagnetic Reference Field) models, which are the most accurate description available for the geomagnetic field. On average, the dipole forms 3/4 and non-dipolar terms 1/4 of the instantaneous field, but the temporal mean of the field is assumed to be a pure geocentric axial dipole field. The validity of this GAD (Geocentric Axial Dipole) hypothesis has been estimated using several methods. In this work, the testing rests on the frequency dependence of inclination with respect to latitude. Each combination of dipole (GAD), quadrupole (G2) and octupole (G3) produces a distinct inclination distribution. These theoretical distributions have been compared with those calculated from empirical observations from different continents and, finally, from the entire globe. Only data from Precambrian rocks (over 542 million years old) have been used in this work. The basic assumption is that during the long-term course of drifting continents, the globe is sampled adequately. There were 2823 observations altogether in the paleomagnetic database of the University of Helsinki. The effects of the quality of observations, as well as of the age and rock type, have been tested. For comparison between theoretical and empirical distributions, chi-square testing has been applied. In addition, spatiotemporal binning has been used effectively to remove the errors caused by multiple observations. The modelling from igneous rock data shows that the average magnetic field of the Earth is best described by a combination of a geocentric dipole and a very weak octupole (less than 10% of GAD).
Filtering and binning gave the distributions a more GAD-like appearance, but deviation from GAD increased as a function of the age of the rocks. The distribution calculated from the so-called key poles, the most reliable determinations, behaves almost like GAD, having a zero quadrupole and an octupole of 1% of GAD. No earlier study of rocks older than 400 Ma has given a result so close to GAD; low inclinations have been prominent especially in the sedimentary data. Despite these results, a larger amount of high-quality data and a proof of the long-term randomness of the Earth's continental motions are needed to confirm that the dipole model holds true.
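For reference, the dipole inclination-latitude relation underlying the test is tan I = 2 tan λ; each quadrupole or octupole admixture bends this curve, which is what the inclination-frequency comparison detects. A minimal sketch of the pure-GAD case:

```python
import math

def gad_inclination(latitude_deg):
    """Expected magnetic inclination (degrees) at a given geographic
    latitude for a geocentric axial dipole field: tan I = 2 tan(latitude)."""
    lat = math.radians(latitude_deg)
    return math.degrees(math.atan(2.0 * math.tan(lat)))
```

At the equator the GAD field is horizontal (I = 0°), while at 45° latitude the expected inclination is atan(2), about 63.4°.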


The problem of detecting an unknown transient signal in noise is considered. The SNR of the observed data is first enhanced using a wavelet-domain filter. The output of the wavelet-domain filter is then transformed using a Wigner-Ville transform, which separates the spectrum of the observed signal into narrow frequency bands. Each subband signal at the output of the Wigner-Ville block is subjected to wavelet-based level-dependent denoising (WBLDD) to suppress colored noise. A weighted sum of the absolute values of the outputs of WBLDD is passed through an energy detector, whose output is used as the test statistic for the final decision. By assigning weights proportional to the energy of the corresponding subband signals, the proposed detector approximates a frequency-domain matched filter. Simulation results are presented to show that the performance of the proposed detector is better than that of the wavelet packet transform based detector.
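One plausible reading of the energy-weighted test statistic can be sketched as follows (the paper's exact weighting and normalization may differ):

```python
import numpy as np

def weighted_energy_statistic(subbands):
    """Test statistic in the spirit of the described detector: each
    (denoised) subband output is weighted in proportion to its energy,
    the weighted absolute values are summed across subbands, and the
    energy of the resulting signal is the statistic.
    `subbands` is a list of 1-D arrays of equal length."""
    energies = np.array([np.sum(b**2) for b in subbands])
    weights = energies / energies.sum()      # energy-proportional weights
    combined = sum(w * np.abs(b) for w, b in zip(weights, subbands))
    return float(np.sum(combined**2))
```

Energy-proportional weighting emphasizes the bands where the transient actually lives, which is how the detector mimics a matched filter without knowing the signal.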


A few simple three-atom thermoneutral radical exchange reactions (i.e. A + BC --> AB + C) are examined by ab initio SCF methods. Emphasis is laid on the detailed analysis of density matrices rather than on energetics. Results reveal that the sum of the bond orders of the breaking and forming bonds is not conserved to unity, due to development of free valence on the migrating atom 'B' in the transition state. Bond orders, free valence and spin densities on the atoms are calculated. The present analysis shows that the bond-cleavage process is always more advanced than the bond-formation process in the transition state. Further analysis shows a development of the negative spin density on the migrating atom 'B' in the transition state. The depletion of the alpha-spin density on the radical site 'A' in the reactant during the reaction lags behind the growth of the alpha-spin density on the terminal atom 'C' of the reactant bond 'B-C' in the transition state. But all these processes are completed simultaneously at the end of the reaction. Hence, the reactions are asynchronous but kinetically concerted in most cases.


The unsteady heat transfer associated with the flow due to eccentrically rotating disks considered by Ramachandra Rao and Kasiviswanathan (1987) is studied via a reformulation in terms of cylindrical polar coordinates. The corresponding exact solution of the energy equation is presented when the upper and lower disks are subjected to steady and unsteady temperatures. For an unsteady flow with nonzero mean, the energy equation can be solved by prescribing the temperature on the disk as a sum of steady and oscillatory parts.