942 results for Parallel programming model


Relevance: 80.00%

Abstract:

Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressure. In real life, these supply chains face stochastic and non-stationary demand, but most studies on the inventory lot-sizing problem with emission concerns consider deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle-service-level constraints under a carbon cap-and-trade regulatory mechanism. Using a mixed integer linear programming model, this paper investigates the effects of emission parameters and of product- and system-related features on supply chain performance through extensive computational experiments designed to cover general business settings rather than a specific scenario. Results show that the cycle service level and the demand coefficient of variation have significant impacts on total cost and emissions irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Finally, results also show that an increasing carbon price reduces total cost, total emissions and total inventory, and that the scope for emission reduction through a higher carbon price is greater at higher cycle service levels and demand coefficients of variation. The analysis of these results helps supply chain managers make the right decisions in different demand and service-level situations.
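For intuition, a stylized single-item deterministic-equivalent sketch of such a formulation can be written down; the symbols below are illustrative and not the paper's own notation:

```latex
\begin{align}
\min\; & \sum_{t=1}^{T}\bigl(f\,y_t + c\,x_t + h\,I_t\bigr) + p\,(e^{+}-e^{-}) \\
\text{s.t.}\; & I_t = I_{t-1} + x_t - d_t, \qquad x_t \le M\,y_t, & t=1,\dots,T \\
& \sum_{t=1}^{T}\bigl(\hat f\,y_t + \hat c\,x_t + \hat h\,I_t\bigr) \le C + e^{+} - e^{-} \\
& \Pr\{I_t \ge 0\} \ge \alpha, \qquad y_t \in \{0,1\},\; x_t,\, I_t,\, e^{+},\, e^{-} \ge 0
\end{align}
```

Here $f$, $c$, $h$ are setup, unit and holding costs and the hatted terms their emission counterparts; $C$ is the emission cap, $p$ the carbon price, $e^{+}/e^{-}$ allowances bought/sold under cap-and-trade, and $\alpha$ the cycle service level enforced as a chance constraint on the stochastic demand $d_t$.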

Relevance: 80.00%

Abstract:

Using the risk measure CVaR in financial analysis has become more and more popular recently. In this paper we apply CVaR to portfolio optimization. The problem is formulated as a two-stage stochastic programming model, and the SRA algorithm, a recently developed heuristic algorithm, is applied to minimize CVaR.
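For intuition, CVaR at level α is the expected loss in the worst (1 − α) tail of the loss distribution. A minimal empirical estimator can be sketched as follows; this is a simplification (the paper's two-stage stochastic program works with the Rockafellar–Uryasev-style formulation, not a sorted sample), and the loss scenarios are made up:

```python
def cvar(losses, alpha=0.95):
    """Empirical CVaR: the mean loss over the worst (1 - alpha) fraction of outcomes."""
    xs = sorted(losses)                            # ascending: worst losses last
    k = max(1, int(round(len(xs) * (1 - alpha))))  # number of tail scenarios
    tail = xs[-k:]
    return sum(tail) / k

# Ten hypothetical portfolio-loss scenarios (positive = loss).
losses = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 3.0, 8.0, 12.0, 20.0]
print(cvar(losses, alpha=0.80))  # mean of the two worst losses: (12 + 20) / 2 = 16.0
```

Unlike Value-at-Risk, this tail average is coherent and convex in the portfolio weights, which is what makes it amenable to stochastic programming.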

Relevance: 80.00%

Abstract:

The risk measure CVaR has gained increasing importance in assessing portfolio risk. Minimizing CVaR over an entire portfolio can be formulated as a two-stage stochastic programming problem. The SRA algorithm is a recently developed heuristic for solving stochastic programming problems. In this paper, the SRA algorithm is applied to minimize CVaR for portfolio optimization.

Relevance: 80.00%

Abstract:

Bus stops are key links in the journeys of transit patrons with disabilities. Inaccessible bus stops prevent people with disabilities from using fixed-route bus services, thus limiting their mobility. The Americans with Disabilities Act (ADA) of 1990 prescribes the minimum requirements for bus stop accessibility by riders with disabilities. Due to limited budgets, transit agencies can only select a limited number of bus stop locations for ADA improvements annually. These locations should preferably be selected so that they maximize the overall benefit to patrons with disabilities. In addition, transit agencies may also choose to implement the universal design paradigm, which involves higher design standards than current ADA requirements and can provide amenities that are useful to all riders, such as shelters and lighting. Many factors can affect the decision to improve a bus stop, including rider-based aspects such as the number of riders with disabilities, total ridership, customer complaints, accidents, and deployment costs, as well as locational aspects such as proximity to employment centers, schools, and shopping areas. These interlacing factors make it difficult to identify optimal improvement locations without the aid of an optimization model. This dissertation proposes two integer programming models to help identify a priority list of bus stops for accessibility improvements. The first is a binary integer programming model designed to identify bus stops that need improvements to meet the minimum ADA requirements. The second is a multi-objective nonlinear mixed integer programming model that attempts to achieve an optimal compromise between the two accessibility design standards. Geographic Information System (GIS) techniques were used extensively both to prepare the model input and to examine the model output. An analytic hierarchy process (AHP) was applied to combine all of the factors affecting the benefits to patrons with disabilities.
An extensive sensitivity analysis was performed to assess the reasonableness of the model outputs in response to changes in model constraints. Based on a case study using data from Broward County Transit (BCT) in Florida, the models were found to produce a list of bus stops that upon close examination were determined to be highly logical. Compared to traditional approaches using staff experience, requests from elected officials, customer complaints, etc., these optimization models offer a more objective and efficient platform on which to make bus stop improvement suggestions.
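The minimum-ADA selection subproblem has the flavor of a 0/1 knapsack: choose which stops to improve so that total benefit is maximized under a budget. A minimal sketch follows; the stop names, costs and AHP-style benefit scores are hypothetical, and the dissertation's actual binary IP has richer constraints:

```python
def select_stops(stops, budget):
    """0/1 knapsack DP: maximize total benefit of improved stops within a budget.
    stops: list of (name, cost, benefit) with integer costs."""
    best = [(0.0, [])] * (budget + 1)   # best[b] = (benefit, chosen stops) using cost <= b
    for name, cost, benefit in stops:
        for b in range(budget, cost - 1, -1):   # iterate downward so each stop is used once
            cand = (best[b - cost][0] + benefit, best[b - cost][1] + [name])
            if cand[0] > best[b][0]:
                best[b] = cand
    return best[budget]

# Hypothetical stops: (id, improvement cost, AHP-combined benefit score).
stops = [("S1", 4, 7.0), ("S2", 3, 5.0), ("S3", 5, 8.0), ("S4", 2, 3.0)]
print(select_stops(stops, 9))  # -> (15.0, ['S1', 'S3'])
```

With a budget of 9, improving S1 and S3 (total cost 9, benefit 15.0) dominates every other feasible combination.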

Relevance: 80.00%

Abstract:

The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS for the WCDMA Core Network (CN) is becoming more and more important to users and carriers. This dissertation focuses on enhancing QoS for the WCDMA CN by realizing the DiffServ (Differentiated Services) QoS model. Based on the parallelism characteristics of Network Processors (NPs), NP programming models can be classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model combining the two was designed. This model is highly efficient and flexible, and it also solves the problems of sharing conflicts and packet ordering. We used it as the programming model to realize DiffServ QoS for the WCDMA CN. The realization of the DiffServ model mainly consists of buffer management, packet scheduling and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes both fairness and throughput into consideration and has smooth service curves. Second, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure fairness of packet scheduling and reduce the queuing time of data packets, while keeping delay and jitter within a small range. Third, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space, and has lower time and space complexity. Finally, an integrated hardware and software system implementing the DiffServ QoS model for the WCDMA CN was proposed and implemented on the IXP2400 NP. According to the corresponding experimental results, the proposed system significantly enhanced QoS for the WCDMA CN: it markedly improves response-time consistency, display quality and audio-video synchronization, thereby increasing network efficiency and saving network resources.
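PWFQ itself is specific to this dissertation, but its basis, weighted fair queuing, can be sketched with virtual finish times: each packet of a flow finishes at the flow's running virtual time plus size/weight, and packets are served in increasing finish-time order. The flows, sizes and weights below are invented for illustration, and all packets are assumed backlogged at time zero:

```python
def wfq_order(packets, weights):
    """Weighted fair queuing sketch (no priorities).
    packets: {flow: [packet sizes in arrival order]}; weights: {flow: weight}.
    Returns the service order as (flow, packet index) pairs."""
    tagged = []
    for flow, sizes in packets.items():
        t = 0.0
        for i, size in enumerate(sizes):
            t += size / weights[flow]   # virtual finish time of this packet
            tagged.append((t, flow, i))
    return [(f, i) for t, f, i in sorted(tagged)]

# A light, high-weight voice flow interleaved with a heavy, low-weight data flow.
order = wfq_order({"voice": [1, 1], "data": [4, 4]}, {"voice": 2, "data": 1})
print(order)  # -> [('voice', 0), ('voice', 1), ('data', 0), ('data', 1)]
```

The higher weight of the voice flow shrinks its finish-time increments, so its small packets are served ahead of the bulk data, which is exactly the delay/jitter behavior a DiffServ scheduler needs.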

Relevance: 80.00%

Abstract:

This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and mostly addressed with heuristics. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation; the computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of the study focuses on developing an effective solution approach for large instances. The proposed approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems, and a computer simulation experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted-most-profit rule performs best. The shifting bottleneck and earliest operation finish time rules are both the best scheduling rules. For lot sizing, the proposed minimum-cost-increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high, while the proposed minimum-cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance; the results show it can improve total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for problems of industrial scale.
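The Wagner-Whitin algorithm underlying the second lot-sizing heuristic solves the uncapacitated single-item problem exactly by dynamic programming over the period of the last order. A compact sketch, with illustrative demands and costs (not data from the dissertation):

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Wagner-Whitin DP: minimum total setup + holding cost for uncapacitated lot sizing.
    An order in period j covers demand for periods j..t-1; holding cost is per unit-period."""
    n = len(demand)
    best = [0.0] + [float("inf")] * n      # best[t] = min cost to satisfy periods 0..t-1
    for t in range(1, n + 1):
        for j in range(t):                 # last order is placed in period j
            hold = sum(hold_cost * (k - j) * demand[k] for k in range(j, t))
            best[t] = min(best[t], best[j] + setup_cost + hold)
    return best[n]

# Four periods of demand, setup cost 100 per order, holding cost 1 per unit per period.
print(wagner_whitin([20, 50, 10, 50], setup_cost=100, hold_cost=1))  # -> 270.0
```

The optimum here is two orders: one in period 1 covering the first three periods (setup 100 + holding 70) and one in period 4 (setup 100), for a total of 270.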

Relevance: 80.00%

Abstract:

Infrastructure management agencies face multiple challenges, including aging infrastructure, reduced capacity of existing infrastructure, and limited available funds. Decision makers are therefore required to think innovatively and develop inventive ways of using those funds. Maintenance investment decisions are generally based on physical condition alone, yet spending money on public infrastructure is ultimately spending money on the people who use it. This calls for additional decision parameters beyond physical condition, such as strategic importance, socioeconomic contribution and infrastructure utilization. Considering multiple decision parameters for infrastructure maintenance investments can be beneficial when funding is limited. Given this motivation, this dissertation presents a prototype decision support framework to evaluate trade-offs among competing infrastructures that are candidates for maintenance, repair and rehabilitation investments. Decision parameters' performances, measured through various factors, are combined to determine the integrated state of an infrastructure using Multi-Attribute Utility Theory (MAUT). The integrated state, together with cost and benefit estimates of probable maintenance actions and expert opinion, is used to develop transition probability and reward matrices for each probable maintenance action for a particular candidate infrastructure. These matrices are then used as input to a Markov Decision Process (MDP) in a finite-stage dynamic programming model that performs project (candidate)-level analysis to determine optimized maintenance strategies based on reward maximization. The outcomes of the project-level analysis are then used in a network-level analysis that takes a portfolio management approach to determine a suitable portfolio under budgetary constraints. The major decision support outcomes of the prototype framework include performance trend curves, decision logic maps, and a network-level maintenance investment plan for the upcoming years. The framework has been implemented on a set of bridges treated as a network, with the assistance of the Pima County DOT, AZ. It is expected that this prototype framework can help infrastructure management agencies better manage their available maintenance funds.
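The project-level step is finite-horizon backward induction over the MDP: at each stage, each condition state takes the action maximizing immediate reward plus expected future value. A minimal sketch, in which the two condition states, two actions, transition probabilities and rewards are invented for illustration (the dissertation's matrices come from MAUT scores and expert opinion):

```python
def finite_horizon_mdp(P, R, horizon):
    """Backward induction for a finite-stage MDP.
    P[a][s][s2]: transition probability; R[a][s]: immediate reward of action a in state s.
    Returns (values, policy) for the first stage."""
    n = len(next(iter(P.values())))
    V = [0.0] * n                      # terminal values
    policy = [None] * n
    for _ in range(horizon):
        newV, newPi = [], []
        for s in range(n):
            q = {a: R[a][s] + sum(P[a][s][s2] * V[s2] for s2 in range(n)) for a in P}
            a_best = max(q, key=q.get)
            newV.append(q[a_best]); newPi.append(a_best)
        V, policy = newV, newPi
    return V, policy

# Hypothetical bridge states (0 = good, 1 = poor) and maintenance actions.
P = {"maintain":   [[0.9, 0.1], [0.6, 0.4]],
     "do_nothing": [[0.7, 0.3], [0.1, 0.9]]}
R = {"maintain": [4.0, 1.0], "do_nothing": [5.0, 0.0]}
V, pi = finite_horizon_mdp(P, R, horizon=3)
print(pi)  # -> ['do_nothing', 'maintain']
```

With these numbers the optimal three-stage strategy defers maintenance while the bridge is in good condition but maintains once it degrades, which is the kind of condition-dependent policy the framework produces for each candidate.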

Relevance: 80.00%

Abstract:

CD4+ T cells play a crucial role in the adaptive immune system. They function as the central hub that orchestrates the rest of immunity: CD4+ T cells are essential governing machinery in antibacterial and antiviral responses, facilitating B cell affinity maturation and coordinating the innate and adaptive immune systems to boost the overall immune outcome; conversely, hyperactivation of the inflammatory lineages of CD4+ T cells, as well as impairment of suppressive CD4+ regulatory T cells, underlies various autoimmune and inflammatory diseases. The broad role of CD4+ T cells in both physiological and pathological contexts prompted me to explore the modulation of CD4+ T cells at the molecular level.

microRNAs (miRNAs) are small RNA molecules capable of regulating gene expression post-transcriptionally. miRNAs have been shown to exert substantial regulatory effects on CD4+ T cell activation, differentiation and helper function. Specifically, my lab previously established the function of the miR-17-92 cluster in Th1 differentiation and anti-tumor responses. Here, I further analyzed the role of this miRNA cluster in Th17 differentiation, specifically in the context of autoimmune disease. Using both gain- and loss-of-function approaches, I demonstrated that miRNAs in miR-17-92, specifically miR-17 and miR-19b, are crucial promoters of Th17 differentiation. Consequently, loss of miR-17-92 expression in T cells mitigated the progression of experimental autoimmune encephalomyelitis and T cell-induced colitis. In combination with my previous data, the molecular dissection of this cluster establishes that miR-19b and miR-17 play a comprehensive role in promoting multiple aspects of inflammatory T cell responses, which underscores them as potential targets for oligonucleotide-based therapy in treating autoimmune diseases.

To systematically study miRNA regulation in effector CD4+ T cells, I devised a large-scale miRNAome profiling to track in vivo miRNA changes in antigen-specific CD4+ T cells activated by Listeria challenge. From this screening, I identified that miR-23a expression tightly correlates with CD4+ effector expansion. Ectopic expression and genetic deletion strategies validated that miR-23a was required for antigen-stimulated effector CD4+ T cell survival in vitro and in vivo. I further determined that miR-23a targets Ppif, a gatekeeper of mitochondrial reactive oxygen species (ROS) release that protects CD4+ T cells from necrosis. Necrosis is a type of cell death that provokes inflammation, and it is prominently triggered by ROS release and its consequent oxidative stress. My finding that miR-23a curbs ROS-mediated necrosis highlights the essential role of this miRNA in maintaining immune homeostasis.

A key feature of miRNAs is their ability to modulate different biological aspects in different cell populations. Previously, my lab found that miR-23a potently suppresses CD8+ T cell cytotoxicity by restricting BLIMP1 expression. Since BLIMP1 has been found to inhibit T follicular helper (Tfh) differentiation by antagonizing the master transcription factor BCL6, I investigated whether miR-23a is also involved in Tfh differentiation. However, I found that miR-23a does not target BLIMP1 in CD4+ T cells, and loss of miR-23a even fostered Tfh differentiation. These data indicate that miR-23a may target other pathways relevant to Tfh differentiation in CD4+ T cells.

Although the lineage identity and regulatory networks of Tfh cells have been defined, the differentiation path of Tfh cells remains elusive. Two models have been proposed to explain the differentiation process of Tfh cells: in the parallel differentiation model, the Tfh lineage segregates from the other effector lineages at the early stage of antigen activation; alternatively, the sequential differentiation model suggests that naïve CD4+ T cells first differentiate into various effector lineages and then further program into Tfh cells. To address this question, I developed a novel in vitro co-culture system that employed antigen-specific CD4+ T cells, naïve B cells presenting cognate T cell antigen, and BAFF-producing feeder cells to mimic the germinal center. Using this system, I was able to robustly generate GC-like B cells. Notably, well-differentiated Th1 or Th2 effector cells also quickly acquired Tfh phenotype and function during in vitro co-culture, which suggested a sequential differentiation path for Tfh cells. To examine this path in vivo, under conditions of classical Th1- or Th2-type immunizations, I employed a TCRβ repertoire sequencing technique to track the clonotype origin of Tfh cells. Under both Th1- and Th2-immunization conditions, I observed profound repertoire overlaps between the Teff and Tfh populations, which strongly supports the proposed sequential differentiation model. Therefore, my studies establish a new platform to conveniently study Tfh-GC B cell interactions and provide insights into Tfh differentiation processes.

Relevance: 80.00%

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US NAVY divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. In Chapters 1 and 2 we present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results favored a novel hazard function definition that included both an ambient pressure scaling term and individually fitted compartment exponent scaling terms.

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow from the environment to the compartment systems. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two delay models as having the best overall performance, but comparisons against the best-performing no-delay pharmacokinetic model indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure cannot occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of those regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
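The boundary of such a region is the root of a monotone function of the parameter, so once it is bracketed it can be located by bisection. The risk function below is a hypothetical stand-in, not one of the dissertation's models:

```python
def bisect_boundary(f, lo, hi, tol=1e-9):
    """Locate the root of a monotone function on [lo, hi] by bisection.
    Requires a bracket: f(lo) and f(hi) must differ in sign."""
    assert f(lo) * f(hi) <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical instantaneous-risk metric: becomes positive once a gain
# parameter g exceeds a threshold; that threshold bounds the failure-free region.
risk = lambda g: g - 0.37
g_star = bisect_boundary(risk, 0.0, 1.0)
print(round(g_star, 6))  # -> 0.37
```

Because each iteration halves the bracket, the boundary is located to tolerance in logarithmically many risk evaluations, which is what makes the bounding step cheap relative to full parameter optimization.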

Relevance: 80.00%

Abstract:

The development of atherosclerosis in the aorta is associated with low and oscillatory wall shear stress for normal patients. Moreover, localized differences in wall shear stress heterogeneity have been correlated with the presence of complex plaques in the descending aorta. While it is known that coarctation of the aorta can influence indices of wall shear stress, it is unclear how the degree of narrowing influences resulting patterns. We hypothesized that the degree of coarctation would have a strong influence on focal heterogeneity of wall shear stress. To test this hypothesis, we modeled the fluid dynamics in a patient-specific aorta with varied degrees of coarctation. We first validated a massively parallel computational model against experimental results for the patient geometry and then evaluated local shear stress patterns for a range of degrees of coarctation. Wall shear stress patterns at two cross sectional slices prone to develop atherosclerotic plaques were evaluated. Levels at different focal regions were compared to the conventional measure of average circumferential shear stress to enable localized quantification of coarctation-induced shear stress alteration. We find that the coarctation degree causes highly heterogeneous changes in wall shear stress.

Relevance: 80.00%

Abstract:

In this paper we advocate the Loop-of-Stencil-Reduce pattern as a way to simplify the parallel programming of heterogeneous platforms (multicore + GPUs). Loop-of-Stencil-Reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop. It transparently targets (via OpenCL) combinations of CPU cores and GPUs, and it makes it possible to simplify the deployment of a single stencil computation kernel on different GPUs. The paper discusses the implementation of Loop-of-Stencil-Reduce within the FastFlow parallel framework, considering a simple iterative data-parallel application (Game of Life) as a running example and a highly effective parallel filter for visual data restoration to assess performance. Thanks to the high-level design of Loop-of-Stencil-Reduce, it was possible to run the filter seamlessly on a multicore machine, on multiple GPUs, and on both combined.
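Structurally, the pattern is just: repeat { apply a stencil to the whole grid; reduce the result }. A plain-Python rendering with Game of Life as the stencil shows the pattern's shape; FastFlow would execute the same structure in parallel via OpenCL, which this sequential sketch does not attempt:

```python
def step(grid):
    """One stencil application of Conway's Game of Life on a toroidal grid."""
    n, m = len(grid), len(grid[0])
    def live_neighbors(i, j):
        return sum(grid[(i + di) % n][(j + dj) % m]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))
    return [[1 if (grid[i][j] and live_neighbors(i, j) in (2, 3))
                 or (not grid[i][j] and live_neighbors(i, j) == 3) else 0
             for j in range(m)] for i in range(n)]

# Loop-of-stencil-reduce: iterate the stencil, then reduce each frame to a scalar.
grid = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]  # a "blinker"
for _ in range(2):
    grid = step(grid)
pop = sum(sum(row) for row in grid)   # the reduce step
print(pop)  # a blinker oscillates with period 2, so the population stays 3
```

The reduce step is what typically drives the loop's termination condition (e.g. "iterate until the residual of the restoration filter falls below a threshold"), which is why having the loop inside the pattern matters.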

Relevance: 80.00%

Abstract:

The astonishing development of diverse hardware platforms is twofold: on one side, the push for exascale performance in big data processing and management; on the other, mobile and embedded devices for data collection and human-machine interaction. This has driven a highly hierarchical evolution of programming models. GVirtuS is a general virtualization system developed in 2009 and first introduced in 2010, providing a completely transparent layer between GPUs and VMs. This paper presents the latest achievements and developments of GVirtuS, which now supports CUDA 6.5, memory management and scheduling. Thanks to its new and improved remoting capabilities, GVirtuS now enables GPU sharing among physical and virtual machines based on x86 and ARM CPUs, on local workstations, computing clusters and distributed cloud appliances.

Relevance: 80.00%

Abstract:

Graph analytics is an important and computationally demanding class of data analytics. It is essential to balance scalability, ease of use and high performance in large-scale graph analytics, which requires hiding the complexity of parallelism, data distribution and memory locality behind an abstract interface. The aim of this work is to build a scalable, NUMA-aware graph analytics framework that does not demand significant parallel programming experience.
The realization of such a system faces two key problems: (i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; and (ii) how to efficiently apply graph partitioning in order to create separate and largely independent work items that can be distributed among threads.
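For the second problem, one simple baseline is a 1-D balanced partition: split the vertex range into k contiguous chunks with roughly equal total degree, so each thread (or NUMA domain) owns a similar share of the edges. A sketch follows; the adjacency list is a toy example, and real NUMA-aware frameworks use considerably more sophisticated partitioners:

```python
def partition_by_degree(adj, k):
    """Split vertices 0..n-1 into k contiguous chunks with roughly equal edge counts.
    adj: adjacency list; returns a list of half-open (start, end) vertex ranges."""
    deg = [len(nbrs) for nbrs in adj]
    target = sum(deg) / k                 # edges each chunk should roughly own
    parts, start, acc = [], 0, 0
    for v, d in enumerate(deg):
        acc += d
        if acc >= target and len(parts) < k - 1:
            parts.append((start, v + 1))
            start, acc = v + 1, 0
    parts.append((start, len(adj)))
    return parts

# Two hubs (vertices 0 and 4) surrounded by leaves: degrees [3, 1, 1, 2, 4, 1, 1, 1].
adj = [[1, 2, 3], [0], [0], [0, 4], [3, 5, 6, 7], [4], [4], [4]]
print(partition_by_degree(adj, 2))  # -> [(0, 4), (4, 8)]
```

Balancing on degree rather than vertex count matters for power-law graphs, where a naive even split of vertices can leave one thread with most of the edges.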

Relevance: 80.00%

Abstract:

When transporting timber from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, weather conditions, forest fires, the arrival of new loads, etc.). When such events only become known during a trip, the truck making that trip must be diverted to an alternative route. Without information about such a route, the driver is likely to choose an alternative route that is unnecessarily long or, worse, one that is itself "closed" because of an unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions of alternative routes when a planned road turns out to be impassable. The recourse options in case of unforeseen events depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transport management policy. We present three articles covering different application contexts, together with models and solution methods adapted to each context.

In the first article, the truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize the changes to the initial plan. Although the truck fleet is homogeneous, there is a priority order among drivers: those with the highest priority receive the largest workloads, and minimizing the changes to their plans is also a priority. Since the consequences of unforeseen events on the transportation plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation or delay of a single trip and is then generalized to handle more complex events. In this approach, we try to re-plan the affected trips within the same week so that a loader is free when the truck arrives, both at the forest site and at the mill. This way, the trips of the other trucks are not modified. The approach provides dispatchers with alternative plans within a few seconds; better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan.

In the second article, we consider a context where only one trip at a time is communicated to the drivers. The dispatcher waits until a driver finishes a trip before revealing the next one. This context is more flexible and offers more recourse options in case of unforeseen events. Moreover, the weekly problem can be divided into daily problems, since demand is daily and the mills are open for limited periods during the day. We use a mathematical programming model based on a time-space network to react to disruptions. Although disruptions can have different effects on the initial transportation plan, a key feature of the proposed model is that it remains valid for handling any unforeseen event, whatever its nature: the impact of these events is captured in the time-space network and in the input parameters rather than in the model itself. The model is re-solved for the current day whenever an unforeseen event is revealed.

In the last article, the truck fleet is heterogeneous, including trucks with on-board loaders. The route structure of these trucks differs from that of regular trucks, since they do not need to be synchronized with the loaders. We use a mathematical model whose columns can be easily and naturally interpreted as truck routes. We solve this model using column generation: first, we relax the integrality of the decision variables and consider only a subset of the feasible routes; routes with the potential to improve the current solution are then added to the model iteratively. A time-space network is used both to represent the impact of unforeseen events and to generate these routes. The resulting solution is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forestry industry, and numerical results are presented for all three contexts.
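A recurring primitive behind all three contexts is recomputing a best route on a network from which disrupted arcs have been removed. A toy version is sketched below; the node names and travel times are invented, and the thesis actually embeds such rerouting in time-space-network models solved by mathematical programming and branch-and-price, not in plain shortest-path queries:

```python
import heapq

def reroute(graph, start, goal, closed=frozenset()):
    """Dijkstra over a road network, skipping arcs closed by unforeseen events.
    graph: {node: [(neighbor, travel_time), ...]}; closed: set of (u, v) arcs."""
    dist, pq = {start: 0.0}, [(0.0, start, [start])]
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == goal:
            return d, path
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph.get(u, []):
            if (u, v) in closed:
                continue                    # arc unavailable after the disruption
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v, path + [v]))
    return float("inf"), []

graph = {"forest": [("A", 2), ("B", 5)], "A": [("mill", 3)], "B": [("mill", 1)]}
print(reroute(graph, "forest", "mill"))                          # planned route via A
print(reroute(graph, "forest", "mill", closed={("A", "mill")}))  # detour via B
```

When the arc A→mill closes mid-trip, the same query over the reduced network immediately yields the detour via B, which is the kind of real-time suggestion the dispatching system must provide.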

Relevance: 80.00%

Abstract:

This paper presents an integer programming model for developing optimal shift schedules while allowing extensive flexibility in terms of alternate shift starting times, shift lengths, and break placement. The model combines the work of Moondra (1976) and Bechtold and Jacobs (1990) by implicitly matching meal breaks to implicitly represented shifts. Moreover, the new model extends the work of these authors to enable the scheduling of overtime and the scheduling of rest breaks. We compare the new model to Bechtold and Jacobs' model over a diverse set of 588 test problems. The new model generates optimal solutions more rapidly, solves problems with more shift alternatives, and does not generate schedules violating the operative restrictions on break timing.
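Implicit modeling keeps the IP small, but the underlying question can be seen explicitly on a tiny instance: choose how many employees work each shift pattern so that coverage meets demand in every period at minimum cost. An exhaustive-search sketch follows; the shift patterns, costs and demands are made up, and a real instance of this size class would use an IP solver rather than enumeration:

```python
from itertools import product

def best_schedule(shifts, demand, max_staff=3):
    """Exhaustive tour scheduling: shifts[i] = (cost, coverage vector over periods).
    Find counts x_i in 0..max_staff with sum_i x_i * coverage >= demand, min total cost."""
    best = None
    for counts in product(range(max_staff + 1), repeat=len(shifts)):
        cover = [sum(x * shifts[i][1][p] for i, x in enumerate(counts))
                 for p in range(len(demand))]
        if all(c >= d for c, d in zip(cover, demand)):
            cost = sum(x * shifts[i][0] for i, x in enumerate(counts))
            if best is None or cost < best[0]:
                best = (cost, counts)
    return best

# Three hypothetical shift types over four periods (1 = on duty, 0 = off or on break).
shifts = [(8, (1, 1, 1, 0)),   # early shift
          (8, (0, 1, 1, 1)),   # late shift
          (5, (0, 1, 1, 0))]   # short mid-day shift
print(best_schedule(shifts, demand=(1, 2, 2, 1)))  # -> (16, (1, 1, 0))
```

One early and one late employee cover the demand profile exactly; the cheap mid-day shift is unnecessary here, though it would enter the optimum if the mid-day demand rose.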