36 results for Input-output Tables
Abstract:
Measurement of the arterial input function is a limiting factor for quantitative (18)F-FDG PET studies in rodents because of their small total blood volume and the associated difficulty of withdrawing blood.
Abstract:
Escherichia coli-based bioreporters for arsenic detection are typically based on the natural feedback loop that controls ars operon transcription. Feedback loops are known to show a wide-range linear response at the expense of the overall amplification of the incoming signal. While favourable for controlling arsenic detoxification in the cell, a feedback loop is not necessarily optimal for obtaining the highest sensitivity and response in a designed cellular reporter for arsenic detection. Here we systematically explore the effects of uncoupling the topology of the arsenic-sensing circuitry on the reporter signal as a function of arsenite concentration input. A model was developed to describe relative ArsR and GFP levels in feedback and uncoupled circuitry, which was used to explore new ArsR-based synthetic circuits. The expression of arsR was then placed under the control of a series of constitutive promoters, which differed in promoter strength and could be further modulated by TetR repression. Expression of the reporter gene was maintained under the ArsR-controlled Pars promoter. ArsR expression in the systems was measured using ArsR-mCherry fusion proteins. We find that stronger constitutive ArsR production decreases arsenite-dependent EGFP output from Pars and vice versa. This leads to a tunable series of arsenite-dependent EGFP outputs in a variety of systematically characterized circuitries. The higher expression levels and sensitivities of the response curves in the uncoupled circuits may be useful for improving field-test assays based on arsenic bioreporters.
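The central qualitative result (stronger constitutive ArsR production lowering arsenite-dependent reporter output) can be illustrated with a toy dynamical model of the uncoupled circuit. All rate constants, the repression term and the arsenite-sequestration term below are illustrative assumptions, not the model or parameters from the study.

```python
# Toy model of the uncoupled circuit: arsR is expressed constitutively,
# the reporter is expressed from the ArsR-repressed Pars promoter, and
# arsenite sequesters ArsR. All parameter values are made up for illustration.

def pars_activity(arsr, arsenite, k_as=1.0, k_rep=0.1):
    """Promoter activity in [0, 1]: arsenite lowers free ArsR, relieving repression."""
    free_arsr = arsr / (1.0 + arsenite / k_as)
    return 1.0 / (1.0 + free_arsr / k_rep)

def steady_gfp(const_arsr_rate, arsenite, alpha_g=10.0, delta=1.0,
               dt=0.01, steps=20000):
    """Euler-integrate ArsR and reporter levels to an approximate steady state."""
    arsr = gfp = 0.0
    for _ in range(steps):
        arsr += dt * (const_arsr_rate - delta * arsr)
        gfp += dt * (alpha_g * pars_activity(arsr, arsenite) - delta * gfp)
    return gfp

weak = steady_gfp(const_arsr_rate=0.2, arsenite=1.0)    # weak constitutive promoter
strong = steady_gfp(const_arsr_rate=2.0, arsenite=1.0)  # strong constitutive promoter
# In this toy model, stronger constitutive ArsR production yields lower
# arsenite-dependent reporter output, matching the trend reported above.
```

In this sketch the weak-promoter variant produces the higher reporter output at the same arsenite input, which is the tunability the abstract describes.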
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm named SimpleMKL, which wraps a Support Vector Regression model to optimize the MKL weights, is used for the analysis. In this setting, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (AutoRegressive, Henon and Lorenz series).
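The core MKL construction, a kernel formed as a weighted linear combination of basis kernels, can be sketched as follows. The weights here are fixed by hand purely for illustration; SimpleMKL would instead optimize them inside an SVR wrapper, and the particular basis kernels and bandwidths are assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) basis kernel on the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def linear_kernel(X):
    return X @ X.T

def combined_kernel(X, weights, gammas=(0.1, 1.0)):
    """K = sum_m w_m K_m over a small dictionary of basis kernels.
    Kernels (and hence the inputs they encode) with zero weight are
    effectively discarded, which is how MKL performs feature selection."""
    basis = [rbf_kernel(X, g) for g in gammas] + [linear_kernel(X)]
    return sum(w * K for w, K in zip(weights, basis))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))     # 5 samples of 3 lagged time-series inputs
w = np.array([0.5, 0.0, 0.5])   # the zero weight drops the second basis kernel
K = combined_kernel(X, w)       # 5 x 5 combined Gram matrix
```

The combined Gram matrix stays symmetric positive semidefinite because it is a nonnegative combination of valid kernels, so it can be passed directly to any kernel-based regressor.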
Abstract:
BACKGROUND: Knowledge of normal heart weight ranges is important information for pathologists. Comparing the measured heart weight to reference values is one of the key elements used to determine whether the heart is pathological, as heart weight increases in many cardiac pathologies. The current reference tables are old and in need of an update. AIMS: The purposes of this study are to establish new reference tables for normal heart weights in the local population and to determine the best predictive factor for normal heart weight. We also aim to provide technical support for calculating the predicted normal heart weight. METHODS: The reference values are based on a retrospective analysis of adult Caucasian autopsy cases without any obvious pathology collected at the University Centre of Legal Medicine in Lausanne from 2007 to 2011. We selected 288 cases. The mean age was 39.2 years. There were 118 men and 170 women. Regression analyses were performed to assess the relationship of heart weight to body weight, body height, body mass index (BMI) and body surface area (BSA). RESULTS: Heart weight increased along with an increase in all the parameters studied. The mean heart weight was greater in men than in women at a similar body weight. BSA was determined to be the best predictor of normal heart weight. New reference tables for predicted heart weights are presented as a web application that enables the comparison of heart weights observed at autopsy with the reference values. CONCLUSIONS: Reference tables for heart weight and other organs should be systematically updated and adapted to the local population. Web access and smartphone applications for the predicted heart weight represent important investigational tools.
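Since the abstract does not report the fitted regression coefficients, the prediction step can only be sketched with placeholder values. The Du Bois formula for BSA below is a standard published formula; the intercept and slope of the heart-weight regression are hypothetical stand-ins, not the study's results.

```python
def bsa_du_bois(weight_kg, height_cm):
    """Du Bois & Du Bois body surface area in m^2 (standard formula)."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def predicted_heart_weight(weight_kg, height_cm, intercept=-50.0, slope=200.0):
    """Linear prediction of normal heart weight (g) from BSA.
    The intercept and slope are HYPOTHETICAL placeholders for illustration,
    not the coefficients fitted in the study."""
    return intercept + slope * bsa_du_bois(weight_kg, height_cm)

bsa = bsa_du_bois(70, 170)   # roughly 1.8 m^2 for a 70 kg, 170 cm adult
```

An autopsy heart weight would then be compared against the predicted value (plus a tolerance interval from the reference tables) to flag possible cardiomegaly.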
Abstract:
This contribution introduces Data Envelopment Analysis (DEA), a performance measurement technique. DEA helps decision makers for the following reasons: (1) by calculating an efficiency score, it indicates whether a firm is efficient or has capacity for improvement; (2) by setting target values for input and output, it calculates how much input must be decreased or output increased in order to become efficient; (3) by identifying the nature of returns to scale, it indicates whether a firm has to decrease or increase its scale (or size) in order to minimise average total cost; (4) by identifying a set of benchmarks, it specifies which other firms' processes should be analysed so that a firm can improve its own practices. This contribution presents the essentials of DEA, alongside a case study that provides an intuitive understanding of its application. It also introduces Win4DEAP, a software package that conducts efficiency analysis based on DEA methodology. The methodological background of DEA is presented for more demanding readers. Finally, four advanced topics of DEA are treated: adjustment to the environment, preferences, sensitivity analysis and time series data.
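Point (1), computing an efficiency score, can be sketched with the classic input-oriented CCR envelopment program: minimize theta such that a nonnegative combination of all firms dominates the evaluated firm's outputs while using at most theta times its inputs. This is the generic textbook DEA formulation, not the Win4DEAP implementation, and the two-firm data set below is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, j0):
    """Input-oriented CCR efficiency of firm j0.
    inputs:  (n_firms, n_inputs), outputs: (n_firms, n_outputs).
    Solves: min theta  s.t.  X^T lam <= theta * x_j0,  Y^T lam >= y_j0,  lam >= 0."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam_1..lam_n]
    # Input constraints:  sum_j lam_j x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[j0][:, None], X.T]
    # Output constraints: -sum_j lam_j y_rj <= -y_r,j0
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[j0]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Two firms, one input, one output: firm 1 uses twice the input for the same output,
# so it should score 0.5 while firm 0 scores 1.0 (efficient).
X = [[2.0], [4.0]]
Y = [[2.0], [2.0]]
```

A score below 1 quantifies the proportional input reduction needed to reach the efficient frontier, which is exactly the target-setting use described in point (2).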
Abstract:
Some patients infected with human immunodeficiency virus (HIV) who are experiencing antiretroviral treatment failure have persistent improvement in CD4+ T cell counts despite high plasma viremia. To explore the mechanisms responsible for this phenomenon, 2 parameters influencing the dynamics of CD4+ T cells were evaluated: death of mature CD4+ T cells and replenishment of the CD4+ T cell pool by the thymus. The improvement in CD4+ T cells observed in patients with treatment failure was not correlated with spontaneous, Fas ligand-induced, or activation-induced T cell death. In contrast, a significant correlation between the improvement in CD4+ T cell counts and thymic output, as assessed by measurement of T cell receptor excision circles, was observed. These observations suggest that increased thymic output contributes to the dissociation between CD4+ T cell counts and viremia in patients failing antiretroviral therapy and support a model in which drug-resistant HIV strains may have reduced replication rates and pathogenicity in the thymus.
Abstract:
Neural comparisons of bilateral sensory inputs are essential for visual depth perception and accurate localization of sounds in space. All animals, from single-cell prokaryotes to humans, orient themselves in response to environmental chemical stimuli, but the contribution of spatial integration of neural activity in olfaction remains unclear. We investigated this problem in Drosophila melanogaster larvae. Using high-resolution behavioral analysis, we studied the chemotaxis behavior of larvae with a single functional olfactory neuron on either the left or right side of the head, allowing us to examine unilateral or bilateral olfactory input. We developed new spectroscopic methods to create stable odorant gradients in which odor concentrations were experimentally measured. In these controlled environments, we observed that a single functional neuron provided sufficient information to permit larval chemotaxis. We found additional evidence that the overall accuracy of navigation is enhanced by the increase in the signal-to-noise ratio conferred by bilateral sensory input.
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables. The majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and it remains an issue even when data is available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model that allows for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is the fact that certain schools operate on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached.
The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing and benefits policies in order to improve the conditions of disadvantaged children.
Abstract:
A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data and the identification of quantitative trait loci (QTL) from whole genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. For this purpose, we have developed a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure to a specific problem. The web server, which returns results in textual and graphical form, is useful for small to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program, written in C++, can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
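The input/output contract described above (coordinates with counts in, peaks with volume, extension and center out) can be illustrated with a toy one-dimensional clustering sketch. This is not MADAP's actual algorithm, whose method and parameters the abstract does not detail; it simply splits sorted coordinates wherever the gap exceeds an assumed threshold.

```python
def cluster_1d(positions, counts, max_gap):
    """Group genome coordinates into clusters wherever the gap between
    consecutive sorted positions exceeds max_gap, then report each
    cluster's volume (total count), extension and count-weighted center."""
    data = sorted(zip(positions, counts))
    clusters, current = [], [data[0]]
    for point in data[1:]:
        if point[0] - current[-1][0] > max_gap:
            clusters.append(current)
            current = []
        current.append(point)
    clusters.append(current)
    summaries = []
    for cl in clusters:
        volume = sum(c for _, c in cl)
        summaries.append({"volume": volume,
                          "extension": cl[-1][0] - cl[0][0],
                          "center": sum(p * c for p, c in cl) / volume})
    return summaries

# Two well-separated pile-ups of tag counts along a chromosome coordinate axis.
peaks = cluster_1d([100, 105, 110, 500, 505], [5, 10, 5, 3, 3], max_gap=50)
```

Here the first cluster has volume 20, extension 10 and a weighted center at position 105; tuning `max_gap` plays the role of the user-adjustable parameters mentioned in the abstract.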
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees to the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip and MinCut methods performed well according to our criteria; on the other, the average consensus, split fit and most similar supertree methods showed poorer performance or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data.
Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
Abstract:
Isotopic analyses of bulk carbonates are considered a useful tool for palaeoclimatic reconstruction, assuming that calcite precipitation occurs at oxygen isotope equilibrium with local water and that detrital carbonate input is absent or insignificant. We present results from Lake Neuchâtel (western Switzerland) that demonstrate equilibrium precipitation of calcite, except during high-productivity periods, and the presence of detrital and resuspended calcite. Mineralogy, geochemistry and stable isotope values of Lake Neuchâtel trap sediments and of the suspended load of adjacent rivers were studied. The mineralogy of suspended matter in the major inflowing rivers documents an important contribution of detrital carbonates, predominantly calcite with minor amounts of dolomite and ankerite. Using mineralogical data, the quantity of allochthonous calcite can be estimated by comparing the ratio (ankerite + dolomite)/(calcite + ankerite + dolomite) in the inflowing rivers and in the traps. Material taken from sediment traps shows an evolution from practically pure endogenic calcite in summer (10-20% detrital material) to higher percentages of detrital material in winter (up to 20-40%). Reflecting these mineralogical variations, δ13C and δ18O values of calcite from sediment traps are more negative in summer than in winter. Since no significant variations in the isotopic composition of lake water were detected over one year, the factors controlling the oxygen isotopic composition of calcite in sediment traps are the precipitation temperature and the percentage of resuspended and detrital calcite. Samples taken close to the river inflow generally have higher delta values than the others, confirming detrital influence. SEM and isotopic studies on different size fractions (<2, 2-6, 6-20, 20-60, >60 μm) of winter and summer samples allowed resuspension to be recognized and new endogenic calcite to be separated from detrital calcite.
The >60 and <2 μm fractions have the highest percentage of detritus, whereas the 2-6 and 6-20 μm fractions are typical of the new endogenic calcite in summer, as given by calculations assuming isotopic equilibrium with local water. In winter these fractions show values similar to those in summer, indicating resuspension. Using the isotopic composition of sediment trap material and of the different size fractions, as well as the isotopic composition of lake water, water temperature measurements and mineralogy, we re-evaluated the potential of bulk carbonate for palaeoclimatic reconstruction in the presence of detrital and resuspended calcite. This re-evaluation leads to the following conclusions: (1) the endogenic signal can be amplified by applying a particle-size separation, once the size of the endogenic calcite is known from SEM study; (2) resuspended calcite does not alter the endogenic signal, but it lowers the time resolution; (3) detrital input decreases with increasing distance from the source, and it modifies the isotopic signal only when very abundant; (4) the influence of detrital calcite on the bulk sediment isotopic composition can be calculated. (C) 1998 Elsevier Science B.V. All rights reserved.
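Point (4), calculating the influence of detrital calcite, rests on the ratio comparison described above: since dolomite and ankerite arrive only with the detrital load, comparing the (ankerite + dolomite)/(total carbonate) ratio in the traps with that of the river suspension scales the detrital fraction. The sketch below uses illustrative numbers, not the paper's data, and the two-endmember isotope mixing correction is a generic mass-balance assumption, not a formula quoted from the study.

```python
def detrital_fraction(trap_ank_dol, trap_carbonate, river_ank_dol, river_carbonate):
    """Estimate the detrital share of trap carbonate by comparing the
    (ankerite + dolomite) / total-carbonate ratio in sediment traps with
    the same ratio in the inflowing rivers' suspended load."""
    return (trap_ank_dol / trap_carbonate) / (river_ank_dol / river_carbonate)

def endogenic_delta(delta_bulk, delta_detrital, f_detrital):
    """Generic two-endmember mixing correction: recover the endogenic
    isotope value from the bulk value given the detrital fraction."""
    return (delta_bulk - f_detrital * delta_detrital) / (1.0 - f_detrital)

# Illustrative values: rivers carry 20% ankerite+dolomite in their carbonate
# load, a summer trap sample carries 3%, so its carbonate is ~15% detrital.
f = detrital_fraction(3.0, 100.0, 20.0, 100.0)
```

With `f` in hand, a measured bulk δ value can be corrected toward the endogenic signal, which is how the detrital influence on the bulk isotopic composition "can be calculated" in conclusion (4).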
Abstract:
Background: Although there has been an abundant literature on farmer's lung disease in recent years, few studies have addressed occupational measures that would allow farmers to remain at work. Nevertheless, we now know that most farmers can be kept at the workplace by means of occupational preventive measures. Methods: This issue is discussed on the basis of a case report. A farmer affected by farmer's lung disease was referred to us by his pneumologist, in order to assess the possibility of keeping him in his job and to determine relevant changes at his workplace to minimize the risk of exposure to dust antigens. This approach required a visit to the workplace by an occupational physician and an occupational hygienist. Results: The workplace visit identified various habits and architectural particularities that were potential sources of exposure. The two main measures proposed to reduce the risk were to wear a respiratory mask while working inside the barn (for example when preparing hay, feeding the cattle or sweeping the floor), and to build direct access from the bathroom (shower and toilet) to the outside, allowing the farmer to leave the barn after showering and changing without the risk of being contaminated again. Although the shower and toilet upgrade has not yet been completed, the modifications already made have led to significant clinical improvement, even though the risk of exposure was high, as the animals had been in the barn for more than two months. Conclusion: The treatment of farmer's lung disease must be multidisciplinary, involving the general practitioner, pneumologist, occupational hygienist and occupational physician.