923 results for Box constrained minimization


Relevance:

20.00%

Publisher:

Abstract:

This work presents a Genetic Algorithm (GA) for the problem of sequencing units on a production line. It takes into account the possibility of changing the sequence of parts by means of stations with access to an intermediate or centralized buffer; access to the buffer is further restricted by the size of the parts. Abstract: This paper presents a Genetic Algorithm (GA) for the problem of sequencing in a mixed-model non-permutation flowshop. Resequencing is permitted where stations have access to intermittent or centralized resequencing buffers. The access to a buffer is restricted by the number of available buffer places and the physical size of the products.
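
As a small, hypothetical illustration of the buffer access rule described above (the class, names and numbers are invented for illustration and are not taken from the paper's GA):

```python
from dataclasses import dataclass, field

@dataclass
class ResequencingBuffer:
    """Hypothetical resequencing buffer with a limited number of places
    and a maximum product size per place."""
    places: int
    max_product_size: float
    stored: list = field(default_factory=list)

    def can_accept(self, product_size: float) -> bool:
        # access is restricted by the number of free places and by product size
        return len(self.stored) < self.places and product_size <= self.max_product_size

    def put(self, product_size: float) -> bool:
        if self.can_accept(product_size):
            self.stored.append(product_size)
            return True
        return False

buf = ResequencingBuffer(places=3, max_product_size=2.0)
print(buf.put(1.5), buf.put(2.5))   # True, False: the second product is too large
```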

Relevance:

20.00%

Publisher:

Abstract:

MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Thus, methods are needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance. Neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved, uniform pattern of expression. Here, we first extend this study to genes with a conserved but specific pattern of expression. Surprisingly, we find that both Pearson's and Euclidean distances used as a measure of expression similarity between genes depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure that is widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the datasets. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
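
As a minimal sketch of the two distance measures discussed above and of the normalization effect on the Euclidean distance (the expression values are made up; this is not the authors' pipeline):

```python
import numpy as np

# Hypothetical expression profiles of two orthologous genes across six tissues.
gene_a = np.array([5.0, 4.8, 5.2, 0.3, 0.2, 0.4])   # tissue-specific expression
gene_b = np.array([4.5, 5.1, 4.9, 0.5, 0.1, 0.3])   # its conserved ortholog

pearson_distance = 1.0 - np.corrcoef(gene_a, gene_b)[0, 1]
euclidean_distance = np.linalg.norm(gene_a - gene_b)

# Rescaling each profile (one possible normalization) changes the Euclidean
# distance but not the Pearson-based one, illustrating the dependence on
# data normalization mentioned in the abstract.
euclidean_rescaled = np.linalg.norm(gene_a / gene_a.sum() - gene_b / gene_b.sum())

print(pearson_distance, euclidean_distance, euclidean_rescaled)
```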

Relevance:

20.00%

Publisher:

Abstract:

Yeast successfully adapts to an environmental stress by altering physiology and fine-tuning metabolism. This fine-tuning is achieved through regulation of both gene expression and protein activity, and it is shaped by various physiological requirements. Such requirements impose a sustained evolutionary pressure that ultimately selects a specific gene expression profile, generating a suitable adaptive response to each environmental change. Although some of the requirements are stress specific, it is likely that others are common to various situations. We hypothesize that an evolutionary pressure for minimizing biosynthetic costs might have left signatures in the physicochemical properties of proteins whose gene expression is fine-tuned during adaptive responses. To test this hypothesis, we analyze existing yeast transcriptomic data for such responses and investigate how several properties of proteins correlate with changes in gene expression. Our results reveal signatures that are consistent with a selective pressure for economy in protein synthesis during the adaptive response of yeast to various types of stress. These signatures differentiate two groups of adaptive responses with respect to how cells manage expenditure in protein biosynthesis. In one group, significant trends towards downregulation of large proteins and upregulation of small ones are observed. In the other group we find no such trends. These results are consistent with resource limitation being important in the evolution of the first group of stress responses.
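
As an illustrative sketch of the kind of correlation test implied above, with made-up protein lengths and expression changes standing in for the transcriptomic data analysed in the paper:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: protein length (a rough proxy for biosynthetic cost) and
# the log2 fold change of the corresponding transcript during a stress response.
protein_length = rng.integers(100, 2000, size=500)
log2_fold_change = -0.0005 * protein_length + rng.normal(0.0, 0.5, size=500)

# A significantly negative rank correlation would be a signature consistent with
# downregulation of large proteins and upregulation of small ones.
rho, pval = spearmanr(protein_length, log2_fold_change)
print(rho, pval)
```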

Relevance:

20.00%

Publisher:

Abstract:

Motivated by experiments on activity in neuronal cultures [J. Soriano, M. Rodríguez Martínez, T. Tlusty, and E. Moses, Proc. Natl. Acad. Sci. 105, 13758 (2008)], we investigate the percolation transition and critical exponents of spatially embedded Erdős-Rényi networks with degree correlations. In our model networks, nodes are randomly distributed in a two-dimensional spatial domain, and the connection probability depends on the Euclidean link length through a power law as well as on the degrees of the linked nodes. Generally, spatial constraints lead to higher percolation thresholds in the sense that more links are needed to achieve global connectivity. Degree correlations, however, can either favor or disfavor percolation, depending on the connectivity rules. We employ two construction methods to introduce degree correlations. In the first one, nodes remain homogeneously distributed and are connected via a distance- and degree-dependent probability. We observe that assortativity in the resulting network leads to a decrease of the percolation threshold. In the second construction method, nodes are first spatially segregated depending on their degree and afterwards connected with a distance-dependent probability. In this segregated model, we find a threshold increase that accompanies the rising assortativity. Additionally, when the network is constructed in a disassortative way, we observe that this property has little effect on the percolation transition.
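
A rough sketch of the first construction method described above; the distance decay, degree weighting and normalisation constants are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n, alpha = 1000, 2.0                      # number of nodes, distance-decay exponent

pos = rng.random((n, 2))                  # nodes homogeneously distributed in the unit square
weight = rng.exponential(1.0, n)          # per-node propensity standing in for the degree factor

# union-find structure to track connected components as links are drawn
parent = np.arange(n)
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]     # path halving
        i = parent[i]
    return i

for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(pos[i] - pos[j])
        # connection probability decays with Euclidean length as a power law and
        # depends on the propensities (degrees) of both endpoints
        p = min(1.0, 0.02 * weight[i] * weight[j] * d ** (-alpha) / n)
        if rng.random() < p:
            parent[find(i)] = find(j)

roots = np.array([find(i) for i in range(n)])
print("largest component fraction:", np.bincount(roots).max() / n)
```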

Relevance:

20.00%

Publisher:

Abstract:

Background: Average energies of nuclear collective modes may be efficiently and accurately computed using a nonrelativistic constrained approach without reliance on a random phase approximation (RPA). Purpose: To extend the constrained approach to the relativistic domain and to establish its impact on the calibration of energy density functionals. Methods: Relativistic RPA calculations of the giant monopole resonance (GMR) are compared against the predictions of the corresponding constrained approach using two accurately calibrated energy density functionals. Results: We find excellent agreement at the 2% level or better between the predictions of the relativistic RPA and the corresponding constrained approach for magic (or semimagic) nuclei ranging from ¹⁶O to ²⁰⁸Pb. Conclusions: An efficient and accurate method is proposed for incorporating nuclear collective excitations into the calibration of future energy density functionals.
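
For orientation, a minimal sketch in LaTeX of the generic constrained-approach relations behind the abstract (standard sum-rule moment notation; the relativistic details and the calibrated functionals are not reproduced here):

```latex
% Centroid energy estimated from sum-rule moments of the excitation operator Q
\[
  E_{\mathrm{con}} = \sqrt{\frac{m_{1}}{m_{-1}}},
  \qquad
  m_{k} = \sum_{n} (E_{n}-E_{0})^{k}\,\bigl|\langle n|\hat{Q}|0\rangle\bigr|^{2}.
\]
% The inverse-energy-weighted moment follows from a constrained calculation with
% \hat{H}_{\lambda} = \hat{H} + \lambda\hat{Q}, so no RPA response is needed:
\[
  m_{-1} = -\tfrac{1}{2}\,
  \left.\frac{\partial \langle \hat{Q}\rangle_{\lambda}}{\partial \lambda}\right|_{\lambda=0},
  \qquad
  \hat{Q} = \sum_{i} r_{i}^{2}\ \text{(isoscalar monopole operator for the GMR)}.
\]
```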

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a new algorithm for blind inversion of Wiener systems is presented. The algorithm is based on minimization of mutual information of the output samples. This minimization is done through a Minimization-Projection (MP) approach, using a nonparametric “gradient” of mutual information.
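
The abstract does not spell out the algorithm, so the following is only a hypothetical illustration of the criterion it names (mutual information of the output samples): a Hammerstein-type inverse with a handful of parameters, a plug-in histogram estimate of mutual information between successive output samples, and a generic optimizer in place of the paper's nonparametric MI "gradient" and Minimization-Projection scheme:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

rng = np.random.default_rng(0)

def hist_mi(x, y, bins=32):
    """Plug-in mutual information estimate from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# Simulated Wiener system: an FIR filter followed by a static nonlinearity.
s = rng.standard_normal(20000)                      # i.i.d. source
y = np.tanh(lfilter([1.0, 0.6, 0.3], [1.0], s))     # observed output

def invert(theta, y):
    """Hammerstein-type inverse: static compensator, then a short FIR filter."""
    a, w1, w2 = theta
    u = y + a * y ** 3                               # crude inverse of the saturation
    return lfilter([1.0, w1, w2], [1.0], u)

def cost(theta):
    z = invert(theta, y)
    # dependence between successive output samples, to be driven towards zero
    return hist_mi(z[1:], z[:-1])

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print(res.x, cost(res.x))
```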

Relevance:

20.00%

Publisher:

Abstract:

Introduction: the maternal pathophysiology of preeclampsia is typically associated with a moderate systemic inflammatory state. High mobility group box 1 (HMGB-1) is a ubiquitous nuclear protein. Under cellular stress it is released into the extracellular medium, where it can exert its pro-inflammatory activity. In preeclampsia, the amniotic fluid and the cytoplasm of trophoblastic cells contain abnormally high amounts of HMGB-1, but it is still not universally accepted that these concentrations are found in maternal blood. Methods: we recruited 32 women in the third trimester of pregnancy, 16 with and 16 without preeclampsia. We also observed 16 healthy non-pregnant women, age-matched to the pregnant women. We measured the serum HMGB-1 concentration in the pregnant women before and then 24-48 hours after delivery, using a commercial ELISA kit. The same assay was performed in the non-pregnant women, but only once, at the time of their inclusion in the study. Results: on the day of inclusion in the study, the median [interquartile range] HMGB-1 concentration was 2.1 ng/ml [1.1 - 3.2] in preeclamptic pregnant women, 1.1 [1.0 - 1.2] in healthy pregnancies (p < 0.05 vs the preeclamptic group) and 0.6 [0.5 - 0.8] in non-pregnant patients (p < 0.01 vs the two other groups). In both groups of pregnant women, the concentrations measured post partum did not differ significantly from those measured before delivery. Conclusion: with or without preeclampsia, the third trimester of pregnancy is associated with elevated circulating levels of HMGB-1. This increase is exaggerated in preeclampsia. The origin of these elevated concentrations remains to be determined, but it appears to involve organs other than the placenta itself.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Coronary artery disease (CAD) remains one of the top public health burdens. Perfusion cardiovascular magnetic resonance (CMR) is a generally accepted means of detecting CAD, but data on its cost effectiveness are scarce. Therefore, the goal of the study was to compare the costs of a CMR-guided strategy vs two invasive strategies in a large CMR registry. METHODS: In 3647 patients with suspected CAD of the EuroCMR-registry (59 centers/18 countries), costs were calculated for diagnostic examinations (CMR, X-ray coronary angiography (CXA) with/without FFR), revascularizations, and complications during a 1-year follow-up. Patients with ischemia-positive CMR underwent an invasive CXA and revascularization at the discretion of the treating physician (=CMR + CXA-strategy). In the hypothetical invasive arm, costs were calculated for an initial CXA and an FFR in vessels with ≥50 % stenoses (=CXA + FFR-strategy), and the same proportion of revascularizations and complications was applied as in the CMR + CXA-strategy. In the CXA-only strategy, costs included those for CXA and for revascularizations of all ≥50 % stenoses. To calculate the proportion of patients with ≥50 % stenoses, the stenosis-FFR relationship from the literature was used. Costs of the three strategies were determined from a third-payer perspective in 4 healthcare systems. RESULTS: Revascularizations were performed in 6.2 %, 4.5 %, and 12.9 % of all patients, patients with atypical chest pain (n = 1786), and patients with typical angina (n = 582), respectively; complications (=all-cause death and non-fatal infarction) occurred in 1.3 %, 1.1 %, and 1.5 %, respectively. The CMR + CXA-strategy reduced costs by 14 %, 34 %, 27 %, and 24 % in the German, UK, Swiss, and US context, respectively, when compared to the CXA + FFR-strategy; and by 59 %, 52 %, 61 %, and 71 %, respectively, versus the CXA-only strategy. In patients with typical angina, cost savings by CMR + CXA vs CXA + FFR were minimal in the German system (2.3 %), intermediate in the US and Swiss systems (11.6 % and 12.8 %, respectively), and substantial in the UK system (18.9 %). Sensitivity analyses confirmed the robustness of the results. CONCLUSIONS: A CMR + CXA-strategy for patients with suspected CAD provides a substantial cost reduction compared to a hypothetical CXA + FFR-strategy in patients with low to intermediate disease prevalence. In the subgroup of patients with typical angina, however, cost savings were only minimal to moderate.

Relevance:

20.00%

Publisher:

Abstract:

Current obesity prevention strategies recommend increasing daily physical activity, assuming that increased activity will lead to corresponding increases in total energy expenditure and prevent or reverse energy imbalance and weight gain [1-3]. Such Additive total energy expenditure models are supported by exercise intervention and accelerometry studies reporting positive correlations between physical activity and total energy expenditure [4] but are challenged by ecological studies in humans and other species showing that more active populations do not have higher total energy expenditure [5-8]. Here we tested a Constrained total energy expenditure model, in which total energy expenditure increases with physical activity at low activity levels but plateaus at higher activity levels as the body adapts to maintain total energy expenditure within a narrow range. We compared total energy expenditure, measured using doubly labeled water, against physical activity, measured using accelerometry, for a large (n = 332) sample of adults living in five populations [9]. After adjusting for body size and composition, total energy expenditure was positively correlated with physical activity, but the relationship was markedly stronger over the lower range of physical activity. For subjects in the upper range of physical activity, total energy expenditure plateaued, supporting a Constrained total energy expenditure model. Body fat percentage and activity intensity appear to modulate the metabolic response to physical activity. Models of energy balance employed in public health [1-3] should be revised to better reflect the constrained nature of total energy expenditure and the complex effects of physical activity on metabolic physiology.
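
Purely to illustrate the contrast between the two models named above; the functional forms, units and parameter values below are invented for illustration and are not the fitted models from the study:

```python
import numpy as np

def additive_tee(activity, baseline=1600.0, kcal_per_unit=5.0):
    """Additive model: each extra unit of activity adds the same energy cost."""
    return baseline + kcal_per_unit * activity

def constrained_tee(activity, baseline=1600.0, kcal_per_unit=5.0, ceiling=600.0):
    """Constrained model: expenditure rises with activity at low levels but
    saturates, keeping total expenditure within a narrow range at high levels."""
    return baseline + ceiling * (1.0 - np.exp(-kcal_per_unit * activity / ceiling))

activity = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # arbitrary activity units
print(additive_tee(activity))
print(constrained_tee(activity))
```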

Relevance:

20.00%

Publisher:

Abstract:

This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Normally, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and its occurrence in time. Additionally, information such as burned area, ignition causes, land use, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. The space-time pattern characterisation thereby represents a powerful tool for understanding the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point-process measures for both global (e.g. the Morisita Index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are global in character and do not consider complex spatial constraints or the high variability and multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space in which phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and for predicting fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena and to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this Thesis responds to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished. -- This thesis develops a complete and flexible statistical methodology for the analysis and detection of the spatial, temporal and spatio-temporal structures of environmental data represented as point patterns.
The methods developed here were applied to simulated datasets as well as to real environmental phenomena; nevertheless, only the case of forest fires in the Canton of Ticino (Switzerland) and that of Portugal are discussed in this document. Normally, environmental phenomena can be modelled as stochastic point processes in which each event, e.g. the ignition point of a forest fire, is determined by its spatial location and its occurrence in time. In addition, information such as burned area, ignition causes, land use, and topographic, climatic and meteorological characteristics, etc., can also be used to characterise the studied phenomenon. Consequently, the definition of the space-time structure is a powerful tool for understanding the distribution of the phenomenon and its correlation with underlying processes such as socio-economic, environmental and meteorological factors. We therefore propose a methodology based on the adaptation and application of statistical and fractal point-process measures for both global analysis (e.g. the Morisita index, the box-counting fractal dimension, the multifractal formalism and Ripley's K-function) and local analysis (e.g. the scan statistic). Numerous measures describing the space-time structures of environmental phenomena can be found in the literature. Nevertheless, most of these measures are global in character and do not consider complex spatial constraints, nor the high variability and multivariate nature of the events. To this end, the methodology proposed here takes into account the complexities of the geographical space in which the phenomenon takes place, through the introduction of the Validity Domain concept and the application of spatial analysis measures to data presenting different geographical constraints. This allows the relative degree of spatial/temporal clustering of the structures of the observed phenomenon to be assessed. In addition, exclusively for the forest fire case, this research also proposes two new methodologies for defining and mapping peri-urban zones, described as anthropogenic spaces in proximity to wildland vegetation or forest, and for predicting fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out statistical/geospatial research with strong application to real cases, to analyse and describe complex environmental phenomena and to overcome unsolved methodological problems in the characterisation of space-time structures, in particular those of forest fire occurrences. This Thesis thus responds to the growing demand in environmental management and monitoring for tools for assessing natural and anthropogenic hazards and risks. The major contributions of this work were presented at national and international conferences and published in 5 peer-reviewed international journals. National and international collaborations were also established and successfully completed.
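
One of the global measures listed above, the box-counting method, is simple enough to sketch. This is a generic illustration on a synthetic homogeneous pattern, not the thesis implementation (which in particular accounts for the Validity Domain constraints):

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point pattern.

    points : (n, 2) array of coordinates scaled to the unit square
    sizes  : box edge lengths, e.g. 1/2, 1/4, ..., 1/128
    """
    counts = []
    for eps in sizes:
        boxes = np.floor(points / eps).astype(int)   # box index of each point
        counts.append(len(np.unique(boxes, axis=0))) # number of occupied boxes
    # slope of log N(eps) versus log(1/eps) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
pts = rng.random((5000, 2))           # homogeneous pattern, dimension close to 2
print(box_counting_dimension(pts, [2.0 ** -k for k in range(1, 8)]))
```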

Relevance:

20.00%

Publisher:

Abstract:

Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his 1879 introduction of the lognormal distribution are recovered and updated.
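
As a concrete illustration of working in a geometry adapted to the simplex rather than the usual Euclidean one, here is a sketch using the standard centred log-ratio (clr) coordinates; the notation in the paper itself may differ:

```python
import numpy as np

def clr(x):
    """Centred log-ratio coordinates of a composition (parts are strictly positive)."""
    lx = np.log(x)
    return lx - lx.mean(axis=-1, keepdims=True)

def aitchison_distance(x, y):
    """Distance in the simplex geometry: Euclidean distance between clr coordinates."""
    return np.linalg.norm(clr(x) - clr(y))

a = np.array([0.70, 0.20, 0.10])     # 3-part compositions (proportions summing to 1)
b = np.array([0.75, 0.15, 0.10])
print(aitchison_distance(a, b), np.linalg.norm(a - b))   # simplex vs ordinary distance
```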

Relevance:

20.00%

Publisher:

Abstract:

The present thesis is focused on the minimization of experimental effort in the prediction of pollutant propagation in rivers, by means of mathematical modelling and knowledge reuse. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge-reuse approach employs the methods of case-based reasoning, graphical analysis and text mining. The contribution of the thesis to the pollutant transport research field consists of: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques which enable the use of variable parameters along rivers in analytical models; (3) models for the estimation of the characteristic transport parameters (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the reuse of information during pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers for both ordinary pollution and accidents. They can also be applied to other, similar rivers for modelling pollutant transport where experimental concentration data are scarce, because the parameter-estimation models developed in the present thesis allow the characteristic transport parameters to be calculated as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified with the information-reuse software. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical operation of processes.
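
For reference, a sketch of the classical analytical solution of the 1-D advection-dispersion equation for an instantaneous point release, the kind of building block the analytical models above rely on; the parameter values are arbitrary placeholders:

```python
import numpy as np

def instantaneous_point_source(x, t, mass, area, velocity, dispersion):
    """C(x, t) for dC/dt + u dC/dx = D d2C/dx2 with an instantaneous release of
    `mass` at x = 0, t = 0 in a river of cross-sectional area `area`."""
    return (mass / (area * np.sqrt(4.0 * np.pi * dispersion * t))
            * np.exp(-(x - velocity * t) ** 2 / (4.0 * dispersion * t)))

x = np.linspace(0.0, 5000.0, 501)           # metres downstream of the release
c = instantaneous_point_source(x, t=3600.0, mass=10.0, area=25.0,
                               velocity=0.5, dispersion=30.0)
print(x[np.argmax(c)], c.max())             # peak location (~u*t) and peak concentration
```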