891 results for potential models
Abstract:
When transporting wood from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, due to weather conditions, forest fires, the arrival of new loads, etc.). When such events only become known during a trip, the truck making that trip must be rerouted to an alternative path. Without information about such a path, the driver is likely to choose an alternative route that is unnecessarily long or, worse, one that is itself closed because of another unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions of alternative routes when a planned road turns out to be impassable. The recourse options available when disruptions occur depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transportation management policy. We present three articles dealing with different application contexts, together with models and solution methods adapted to each context. In the first article, truck drivers have access to the full weekly plan for the current week. In this context, every effort must be made to minimize the changes made to the initial plan. Although the truck fleet is homogeneous, drivers are ranked by priority: the highest-priority drivers receive the largest workloads, and minimizing changes to their plans is also a priority. Since the consequences of unforeseen events on the transportation plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation or delay of a single trip and is then generalized to handle more complex events. In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives both at the forest site and at the mill. In this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context where only one trip at a time is communicated to the drivers. The dispatcher waits until a driver finishes the current trip before revealing the next one. This context is more flexible and offers more recourse options when disruptions occur. In addition, the weekly problem can be split into daily problems, since demand is daily and the mills are open only during limited periods of the day. We use a mathematical programming model based on a time-space network to react to disruptions. Although these disruptions can affect the initial transportation plan in different ways, a key feature of the proposed model is that it remains valid for handling all unforeseen events, whatever their nature. Indeed, the impact of these events is captured in the time-space network and in the input parameters rather than in the model itself. The model is solved for the current day each time an unforeseen event is revealed.
In the last article, the truck fleet is heterogeneous and includes trucks with on-board loaders. The route configuration of these trucks differs from that of regular trucks, since they do not need to be synchronized with the loaders. We use a mathematical model in which the columns can easily and naturally be interpreted as truck routes. We solve this model using column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible routes. Routes with the potential to improve the current solution are added to the model iteratively. A time-space network is used both to represent the impacts of unforeseen events and to generate these routes. The solution obtained is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forestry industry, and numerical results are presented for all three contexts.
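The column-generation loop summarized above follows a standard pattern: a restricted master problem selects among the truck routes generated so far, and a pricing step searches the time-space network for routes that could improve the current solution. As a generic illustration only (the article's exact formulation is not reproduced here), with R' the current subset of feasible routes, c_r the cost of route r, a_ir = 1 if route r covers demand/trip i, and pi_i the dual value of constraint i:

```latex
% Restricted master problem (LP relaxation over the current route subset R')
\min \sum_{r \in R'} c_r x_r
\quad \text{s.t.} \quad \sum_{r \in R'} a_{ir} x_r \ge 1 \;\; \forall i,
\qquad x_r \ge 0 \;\; \forall r \in R'

% Pricing: a route generated on the time-space network is added to R' when its
% reduced cost is negative
\bar{c}_r \;=\; c_r - \sum_i a_{ir}\, \pi_i \;<\; 0
```

When no route with negative reduced cost remains and the solution is still fractional, branching (branch-and-price) restores integrality, as described in the abstract.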
Abstract:
Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often too strong, or the number of parties running a protocol is smaller. In this thesis we take several steps toward bridging the efficiency gap of secure computation by focusing on constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions: - We show an efficient (when amortized over multiple runs) maliciously secure two-party computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties. - We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it is caught then the honest party obtains a certificate proving that the given party cheated. - We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs. - We demonstrate an efficient maliciously secure protocol in the three-party setting.
Abstract:
Background: In recent years, natural resources have come into focus due to their great potential for the discovery and development of novel bioactive compounds and, among them, mushrooms stand out as alternative sources of anti-inflammatory agents. Scope and approach: The present review reports the anti-inflammatory activity of mushroom extracts and of the bioactive metabolites responsible for this action. Additionally, the most common assays used to evaluate the anti-inflammatory activity of mushrooms are reviewed, including in vitro studies in cell lines as well as in vivo animal models. Key findings and conclusions: The anti-inflammatory compounds identified in mushrooms include polysaccharides, terpenes, phenolic acids, steroids, fatty acids and other metabolites. Among them, polysaccharides, terpenoids and phenolic compounds appear to be the most important contributors to the anti-inflammatory activity of mushrooms, as demonstrated by numerous studies. However, clinical trials are needed to confirm the effectiveness of some of these mushroom compounds, namely inhibitors of the NF-κB pathway and of cyclooxygenases related to the expression of many inflammatory mediators.
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as both modelling approaches allow human reactive behaviour to be implemented in the model using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we found that, for our case study, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
Abstract:
The U.S. Nuclear Regulatory Commission implemented a safety goal policy in response to the 1979 Three Mile Island accident. This policy addresses the question “How safe is safe enough?” by specifying quantitative health objectives (QHOs) for comparison with results from nuclear power plant (NPP) probabilistic risk analyses (PRAs) to determine whether proposed regulatory actions are justified based on potential safety benefit. Lessons learned from recent operating experience—including the 2011 Fukushima accident—indicate that accidents involving multiple units at a shared site can occur with non-negligible frequency. Yet risk contributions from such scenarios are excluded by policy from safety goal evaluations—even for the nearly 60% of U.S. NPP sites that include multiple units. This research develops and applies methods for estimating risk metrics for comparison with safety goal QHOs using models from state-of-the-art consequence analyses to evaluate the effect of including multi-unit accident risk contributions in safety goal evaluations.
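As a rough illustration of the kind of comparison a safety goal evaluation performs, the sketch below sums frequency-weighted conditional individual risks over accident sequences, with and without multi-unit contributions, and compares the totals with a QHO limit. All numbers, including the QHO surrogate value, are placeholders for illustration and are not taken from the study.

```python
# Minimal sketch (illustrative placeholder numbers only, not the study's models):
# aggregate accident-sequence contributions into an individual-risk metric and
# compare it with a quantitative health objective (QHO) limit.

def individual_risk(sequences):
    """Sum frequency-weighted conditional individual risks over all sequences."""
    return sum(freq * cond_risk for freq, cond_risk in sequences)

# (frequency per site-year, conditional individual latent-cancer risk) -- made up
single_unit = [(2e-6, 1e-3), (5e-7, 5e-3)]
multi_unit = [(1e-7, 8e-3)]            # concurrent multi-unit scenarios

qho_limit = 2e-6                       # assumed QHO surrogate value, per year

risk_single = individual_risk(single_unit)
risk_total = individual_risk(single_unit + multi_unit)
print(f"single-unit only: {risk_single:.2e} per year (QHO limit {qho_limit:.1e})")
print(f"with multi-unit:  {risk_total:.2e} per year (QHO limit {qho_limit:.1e})")
```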
Abstract:
This paper extends the symmetric/constrained fuzzy power flow models by including potential correlations between nodal injections. The extended model allows the specification of fuzzy generation and load values together with correlations between nodal injections. The enhanced version of the symmetric/constrained fuzzy power flow model is applied to the IEEE 30-bus test system. The results demonstrate the importance of including data correlations in the analysis of transmission system adequacy.
Abstract:
The diaphragm is the primary inspiratory pump muscle of breathing. Notwithstanding its critical role in pulmonary ventilation, the diaphragm, like other striated muscles, is malleable in response to physiological and pathophysiological stressors, with potential implications for the maintenance of respiratory homeostasis. This review considers hypoxic adaptation of the diaphragm muscle, with a focus on functional, structural, and metabolic remodeling relevant to conditions such as high altitude and chronic respiratory disease. On the basis of emerging data in animal models, we posit that hypoxia is a significant driver of respiratory muscle plasticity, with evidence suggestive of both compensatory and deleterious adaptations in conditions of sustained exposure to low oxygen. Cellular strategies driving diaphragm remodeling during exposure to sustained hypoxia appear to confer hypoxic tolerance at the expense of peak force-generating capacity, a key functional parameter that correlates with patient morbidity and mortality. Changes include, but are not limited to: redox-dependent activation of hypoxia-inducible factor (HIF) and MAP kinases; time-dependent carbonylation of key metabolic and functional proteins; decreased mitochondrial respiration; activation of atrophic signaling and increased proteolysis; and altered functional performance. Diaphragm muscle weakness may be a signature effect of sustained hypoxic exposure. We discuss the putative role of reactive oxygen species as mediators of both advantageous and disadvantageous adaptations of diaphragm muscle to sustained hypoxia, and the role of antioxidants in mitigating adverse effects of chronic hypoxic stress on respiratory muscle function.
Abstract:
Background: There is increasing awareness that regardless of the proven value of clinical interventions, the use of effective strategies to implement such interventions into clinical practice is necessary to ensure that patients receive the benefits. However, there is often confusion between what is the clinical intervention and what is the implementation intervention. This may be caused at times by a lack of conceptual clarity between 'intervention' and 'implementation', and at other times by ambiguity in application. We suggest that both the scientific and the clinical communities would benefit from greater clarity; therefore, in this paper, we address the concepts of intervention and implementation, primarily as clinical interventions and implementation interventions, and explore the grey area in between. Discussion: To begin, we consider the similarities, differences and potential greyness between clinical interventions and implementation interventions through an overview of concepts. This is illustrated with reference to two examples of clinical intervention and implementation intervention studies, including the potential ambiguity in between. We then discuss strategies to explore the hybridity of clinical-implementation intervention studies, including the role of theories, frameworks, models, and reporting guidelines that can be applied to help clarify the clinical and implementation intervention, respectively. Conclusion: Semantics provide opportunities for improved precision in depicting what is 'intervention' and what is 'implementation' in health care research. Further, attention to study design, the use of theory, and adoption of reporting guidelines can assist in distinguishing between the clinical intervention and the implementation intervention. However, certain aspects may remain unclear in analyses of hybrid studies of clinical and implementation interventions. Recognizing this potential greyness can inform further discourse.
Abstract:
Hydronium ions (H3O+) are formed at short times in the spurs or along the tracks produced by the radiolysis of water by ionizing radiation of low or high linear energy transfer (LET). This in situ formation of H3O+ temporarily renders the spur/track regions more acidic than the surrounding medium. Although experimental evidence of spur acidity has already been reported, only fragmentary information is available on its magnitude and time dependence. In this work, we determine the H3O+ concentrations and the corresponding pH values as a function of time from the H3O+ yields calculated with Monte Carlo simulations of the track chemistry. Four incident ions of different LET were selected, and two spur/track models were used: (1) an isolated "spherical" spur model (low LET) and (2) a "cylindrical" track model (high LET). In all cases studied, a transient sharp acid-pH effect, which we call an "acid spike" effect, is observed immediately after irradiation. This effect does not appear to have been explored in water or in a cellular environment subjected to ionizing radiation, particularly at high LET. In this respect, this work raises questions about the possible implications of this effect in radiobiology, some of which are briefly discussed. Our calculations were then extended to study the influence of temperature, from 25 to 350 °C, on the in situ formation of H3O+ ions and the acid-spike effect occurring at short times during low-LET water radiolysis. The results show a marked increase in the acid-spike response at high temperatures. Since many processes occurring in the core of a water-cooled nuclear reactor depend critically on pH, the question here is whether these strong, albeit highly localized and transient, variations in acidity contribute to the corrosion and damage of materials.
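To make the "acid spike" idea concrete, the short sketch below converts a radiation-chemical yield of H3O+ into a local concentration inside an idealized spherical spur and then into a pH value via pH = -log10[H3O+]. The G-value, deposited energy and spur radius are assumed round numbers, not results from the thesis' Monte Carlo simulations.

```python
# Back-of-the-envelope sketch (assumed numbers, not the thesis' track-chemistry results):
# local pH inside an idealized spherical spur from an H3O+ yield (molecules per 100 eV).
import math

AVOGADRO = 6.022e23          # molecules per mole

def spur_ph(g_value, deposited_energy_ev, spur_radius_nm):
    """pH from the H3O+ concentration averaged over a spherical spur."""
    n_ions = g_value * deposited_energy_ev / 100.0             # H3O+ molecules in the spur
    radius_cm = spur_radius_nm * 1e-7                          # nm -> cm
    volume_l = (4.0 / 3.0) * math.pi * radius_cm**3 / 1000.0   # cm^3 -> L
    concentration = n_ions / (AVOGADRO * volume_l)             # mol/L
    return -math.log10(concentration)

# e.g. ~4 H3O+ per 100 eV, 60 eV deposited in the spur, 5 nm radius (all assumed)
print(round(spur_ph(4.0, 60.0, 5.0), 2))   # a strongly acidic, highly localized transient
```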
Abstract:
Marine protected areas (MPAs) are today's most important tools for the spatial management and conservation of marine species. Yet, the true protection that they provide to individual fish is unknown, leading to uncertainty associated with MPA effectiveness. In this study, conducted in a recently established coastal MPA in Portugal, we combined the results of individual home range estimation and population distribution models for 3 species of commercial importance and contrasting life histories to infer (1) the size of suitable areas where they would be fully protected and (2) the vulnerability to fishing mortality of each species. Results show that the relationship between MPA size and effective protection is strongly modulated by both the species' home range and the distribution of suitable habitat inside and outside the MPA. This approach provides a better insight into the true potential of MPAs in effectively protecting marine species, since it can reveal the size and location of the areas where protection is most effective and a clear, quantitative estimation of the vulnerability to fishing throughout an entire MPA.
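A simple way to picture the interplay between home-range size and MPA size is to compute the fraction of an individual's (circular) home range that falls inside the MPA boundary. The sketch below uses hypothetical geometries and the shapely library; it is only an illustration of the reasoning, not the study's home-range or distribution models.

```python
# Minimal sketch (hypothetical geometries, not the study's data): fraction of a
# circular home range contained within an MPA polygon, using shapely.
from shapely.geometry import Point, Polygon

# MPA boundary as a simple rectangle, coordinates in metres
mpa = Polygon([(0, 0), (4000, 0), (4000, 2000), (0, 2000)])

def protected_fraction(centre_xy, home_range_radius_m):
    """Share of a circular home range that lies inside the MPA."""
    home_range = Point(centre_xy).buffer(home_range_radius_m)
    return home_range.intersection(mpa).area / home_range.area

# A sedentary individual (300 m radius) vs a wider-ranging one (1500 m radius)
print(protected_fraction((1000, 1000), 300))    # ~1.0 -> effectively protected
print(protected_fraction((1000, 1000), 1500))   # < 1.0 -> partly exposed to fishing
```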
Abstract:
Tyrosine kinase inhibitors (TKIs) effectively target progenitors and mature leukaemic cells but prove less effective at eliminating leukaemic stem cells (LSCs) in patients with chronic myeloid leukaemia (CML). Several reports indicate that the TGFβ superfamily pathway is important for LSC survival and quiescence. We conducted extensive microarray analyses to compare expression patterns in normal haemopoietic stem cells (HSC) and progenitors with CML LSC and progenitor populations in chronic phase (CP), accelerated phase (AP) and blast crisis (BC) CML. The BMP/SMAD pathway and downstream signalling molecules were identified as significantly deregulated in all three phases of CML. The changes observed could potentiate altered autocrine signalling, as BMP2, BMP4 (p<0.05), and ACTIVIN A (p<0.001) were all downregulated, whereas BMP7, BMP10 and TGFβ (p<0.05) were upregulated in CP. This was accompanied by upregulation of BMPRI (p<0.05) and downstream SMADs (p<0.005). Interestingly, as CML progressed, the profile altered, with BC patients showing significant over-expression of ACTIVIN A and its receptor ACVR1C. To further characterise the BMP pathway and identify potential candidate biomarkers within a larger cohort, expression analysis of 42 genes in 60 newly diagnosed CP CML patient samples, enrolled on a phase III clinical trial (www.spirit-cml.org) with greater than 12 months of follow-up data on their response to TKI, was performed. Analysis revealed that the pathway was highly deregulated, with no clear distinction when patients were stratified into good, intermediate and poor response to treatment. One of the major issues in developing new treatments to target LSCs is the ability to test small molecule inhibitors effectively, as it is difficult to obtain sufficient LSCs from primary patient material. Using reprogramming technologies, we generated induced pluripotent stem cells (iPSCs) from CP CML patients and normal donors. CML- and normal-derived iPSCs were differentiated along the mesodermal axis to generate haemopoietic and endothelial precursors (haemangioblasts). iPSC-derived haemangioblasts exhibited sensitivity to TKI treatment with increased apoptosis and reduction in the phosphorylation of downstream target proteins. Dual inhibition studies were performed using BMP pathway inhibitors in combination with TKI on CML cell lines, primary cells and patient-derived iPSCs. Results indicate that they act synergistically to target CML cells both in the presence and absence of BMP4 ligand. Inhibition resulted in decreased proliferation, irreversible cell cycle arrest, increased apoptosis, reduced haemopoietic colony formation, altered gene expression patterns, reduction in self-renewal and a significant reduction in the phosphorylation of downstream target proteins. These changes offer a therapeutic window in CML, with intervention using BMP inhibitors in combination with TKI having the potential to prevent LSC self-renewal and improve outcome for patients. By successfully developing and validating iPSCs for CML drug screening we hope to substantially reduce the reliance on animal models for early preclinical drug screening in leukaemia.
Abstract:
Models based on species distributions are widely used and serve important purposes in ecology, biogeography and conservation. Their continuous predictions of environmental suitability are commonly converted into a binary classification of predicted (or potential) presences and absences, whose accuracy is then evaluated through a number of measures that have been the subject of recent reviews. We propose four additional measures that analyse observation-prediction mismatch from a different angle – namely, from the perspective of the predicted rather than the observed area – and add to the existing toolset of model evaluation methods. We explain how these measures can complete the view provided by the existing measures, allowing further insights into distribution model predictions. We also describe how they can be particularly useful when using models to forecast the spread of diseases or of invasive species and to predict modifications in species’ distributions under climate and land-use change.
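As an illustration of what "looking from the predicted area" means, the sketch below computes confusion-matrix rates conditioned on the predicted presence and absence classes rather than on the observations. These are generic rates in the spirit described above, not necessarily the four specific measures proposed in the paper.

```python
# Illustrative sketch only: rates conditioned on the *predicted* classes of a
# confusion matrix (tp, fp, fn, tn), i.e. the perspective of the predicted area.

def predicted_area_rates(tp, fp, fn, tn):
    """Match/mismatch rates within the predicted presence and absence areas."""
    return {
        "occupied share of predicted presences": tp / (tp + fp),
        "unoccupied share of predicted presences": fp / (tp + fp),
        "vacant share of predicted absences": tn / (tn + fn),
        "occupied share of predicted absences": fn / (tn + fn),
    }

# e.g. 120 occupied cells predicted present, 60 unoccupied cells predicted present, ...
print(predicted_area_rates(tp=120, fp=60, fn=30, tn=290))
```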
Abstract:
Distribution models are used increasingly for species conservation assessments over extensive areas, but the spatial resolution of the modeled data and, consequently, of the predictions generated directly from these models is usually too coarse for local conservation applications. Comprehensive distribution data at finer spatial resolution, however, require a level of sampling that is impractical for most species and regions. Models can be downscaled to predict distribution at finer resolutions, but this increases uncertainty because the predictive ability of models is not necessarily consistent beyond their original scale. We analyzed the performance of downscaled, previously published models of environmental favorability (a generalized linear modeling technique) for a restricted endemic insectivore, the Iberian desman (Galemys pyrenaicus), and a more widespread carnivore, the Eurasian otter (Lutra lutra), in the Iberian Peninsula. The models, built from presence–absence data at 10 × 10 km resolution, were extrapolated to a resolution 100 times finer (1 × 1 km). We compared downscaled predictions of environmental quality for the two species with published data on local observations and on important conservation sites proposed by experts. Predictions were significantly related to observed presence or absence of species and to expert selection of sampling sites and important conservation sites. Our results suggest the potential usefulness of downscaled projections of environmental quality as a proxy for expensive and time-consuming field studies when such studies are not feasible. This method may be valid for other similar species if coarse-resolution distribution data are available to define high-quality areas at a scale that is practical for the application of concrete conservation measures.
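Mechanically, downscaling a model of this kind amounts to evaluating coefficients fitted on coarse-resolution data at predictor values extracted for much finer cells. The sketch below uses a hypothetical logistic model with made-up coefficients and two made-up 1 × 1 km cells; the published favourability models used their own predictors and a prevalence correction not shown here.

```python
# Schematic sketch (hypothetical coefficients and predictor values): applying a
# logistic presence/absence model fitted at 10 x 10 km to 1 x 1 km predictor data.
import math

# assumed coefficients from a coarse-resolution GLM (made up for illustration)
INTERCEPT, B_ALTITUDE, B_PRECIP = -2.1, 0.0008, 0.0015

def fine_scale_probability(altitude_m, annual_precip_mm):
    """Predicted probability of presence for a single 1 x 1 km cell."""
    eta = INTERCEPT + B_ALTITUDE * altitude_m + B_PRECIP * annual_precip_mm
    return 1.0 / (1.0 + math.exp(-eta))

# two hypothetical 1 km cells (altitude in m, annual precipitation in mm)
print(round(fine_scale_probability(900, 1400), 3))
print(round(fine_scale_probability(200, 450), 3))
```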
Abstract:
Transferring distribution models between different geographical areas may be problematic, as the performance of models outside their original scope is hard to predict. A modelling procedure is needed that gets the gist of the environmental descriptors of a distribution area, without either overfitting to the training data or overestimating the species’ distribution potential. We tested the transferability power of the favourability function, a generalized linear model, on the distribution of the Iberian desman (Galemys pyrenaicus) in the Iberian territories of Portugal and Spain. We also tested the effects of two of the main potential constraints on model transferability: the analysed ranges of the predictor variables, and the completeness of the species distribution data. We modelled 10 km × 10 km presence/absence data from Portugal and Spain separately, extrapolated each model to the other country, and compared predictions with observations. The Spanish model, despite arguably containing more false absences, showed good predictive ability in Portugal. The Portuguese model, whose predictors ranged over only a subset of the values observed in Spain, overestimated desman distribution when transferred. We discuss possible reasons for this differential model behaviour, and highlight the importance of this kind of models for prediction and conservation applications.
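For reference, a commonly cited formulation of the favourability function rescales the logistic-regression output P to remove the effect of species prevalence, using the numbers of presences (n1) and absences (n0) in the training data. This is given here from the general literature as context; the exact expression used in the study should be checked against the original paper.

```latex
F \;=\; \frac{\dfrac{P}{1-P}}{\dfrac{n_1}{n_0} \;+\; \dfrac{P}{1-P}}
```

Because F does not depend on prevalence, favourability values are often argued to be more directly comparable when a model is transferred between regions, such as from Spain to Portugal, than raw probabilities are.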
Abstract:
Little information is available on the degree of within-field variability of the potential production of tall wheatgrass (Thinopyrum ponticum) forage under unirrigated conditions. The aim of this study was to characterize the spatial variability of the accumulated biomass (AB) without nutritional limitations through vegetation indexes, and then use this information to delineate potential management zones. A 27 × 27 m grid cell size was chosen and 84 biomass sampling areas (BSA), each 2 m² in size, were georeferenced. Nitrogen and phosphorus fertilizers were applied after an initial cut at 3 cm height. At 500 °C·day of accumulated thermal time, the AB from each sampling area was collected and evaluated. The spatial variability of AB was estimated more accurately using the Normalized Difference Vegetation Index (NDVI) calculated from LANDSAT 8 images obtained on 24 November 2014 (NDVInov) and 10 December 2014 (NDVIdec), because the potential AB was highly associated with NDVInov and NDVIdec (r² = 0.85 and 0.83, respectively). The models relating the potential AB data to NDVI were evaluated by the root mean squared error (RMSE) and the relative root mean squared error (RRMSE); the latter was 12 % and 15 % for NDVInov and NDVIdec, respectively. The spatial correlation of potential AB and NDVI was quantified with semivariograms, and the spatial dependence of AB was low. Six classes of NDVI were analyzed for comparison, and two management zones (MZ) were established from them. To evaluate whether the NDVI method allows MZ with different attainable yields to be delimited, the AB estimated for these MZ was compared through an ANOVA test. The potential AB differed significantly among MZ. Based on these findings, it can be concluded that NDVI obtained from LANDSAT 8 images can be reliably used for creating MZ in soils under permanent pastures dominated by tall wheatgrass.
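The key quantities in this workflow, NDVI from the red and near-infrared bands and the (relative) root mean squared error of the AB–NDVI regression, are easy to reproduce in a few lines. The sketch below uses made-up reflectance and biomass values and an assumed linear calibration; the band numbering (LANDSAT 8 band 4 = red, band 5 = NIR) is standard, but everything else is illustrative only.

```python
# Minimal sketch (made-up values, assumed calibration): NDVI from red/NIR reflectance
# and RMSE / relative RMSE of a simple NDVI-to-biomass regression.
import math

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def rmse(observed, predicted):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

def rrmse(observed, predicted):
    return rmse(observed, predicted) / (sum(observed) / len(observed))

# hypothetical sampling areas: ((NIR, red) reflectance, measured AB in kg DM/ha)
samples = [((0.42, 0.08), 3100), ((0.36, 0.10), 2500),
           ((0.30, 0.12), 1900), ((0.25, 0.14), 1400)]

x = [ndvi(nir, red) for (nir, red), _ in samples]
y = [ab for _, ab in samples]

a, b = -1200.0, 6500.0                    # assumed linear calibration AB = a + b * NDVI
pred = [a + b * xi for xi in x]
print(f"RMSE  = {rmse(y, pred):.0f} kg DM/ha")
print(f"RRMSE = {100 * rrmse(y, pred):.1f} %")
```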