918 results for Multicast Packing Problem. Multiobjective Optimization. Network Optimization. Multicast
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, the forces of six scapulo-humeral muscles, and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was modeled around the humeral head, which was assumed to be spherical. The dynamical equations were solved within a Lagrangian framework. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
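A minimal numerical sketch of the two-step redundancy resolution described above, assuming a generic moment-arm matrix A, a required joint torque b, and physiological force bounds; all matrix values, bounds, and the simple fixed-point correction are illustrative stand-ins, not the paper's implementation (a real model would solve a constrained quadratic program).

```python
import numpy as np

# Hypothetical moment-arm matrix (3 torque components x 6 muscles) and required torque.
A = np.array([[0.02, 0.01, -0.015, 0.03, 0.00, 0.01],
              [0.01, 0.025, 0.02, -0.01, 0.015, 0.00],
              [0.00, 0.01, 0.01, 0.02, -0.02, 0.03]])
b = np.array([2.0, 1.5, 0.5])            # required joint torque (N*m), invented
f_max = np.full(6, 800.0)                # illustrative physiological force limits (N)

# Step 1: pseudo-inverse solution, the minimum-norm force vector satisfying A f = b
# (a proxy for minimizing squared muscle stress when cross-sectional areas are equal).
f_pinv = np.linalg.pinv(A) @ b

# Step 2: null-space correction, adjust forces within null(A) so the joint torque is
# unchanged while forces are pushed toward the physiological range [0, f_max].
N = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A   # projector onto the null space of A
f = f_pinv.copy()
for _ in range(200):                     # simple heuristic fixed-point iteration
    violation = np.clip(f, 0.0, f_max) - f       # distance outside the bounds
    f = f + N @ violation                        # correct without altering A f
print("muscle forces:", np.round(f, 2))
print("torque residual:", np.linalg.norm(A @ f - b))
```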
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
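To make the basic pebbling move and the reachability question concrete, here is a tiny brute-force sketch (not the Weight Function Lemma itself, and only practical for very small graphs); the graph and pebble distribution are made up for the example.

```python
def can_reach(adj, pebbles, target):
    """True if some sequence of pebbling moves places a pebble on `target`.
    A move removes two pebbles from a vertex and adds one to a neighbour."""
    if pebbles.get(target, 0) >= 1:
        return True
    for v, count in list(pebbles.items()):
        if count >= 2:
            for u in adj[v]:
                nxt = dict(pebbles)
                nxt[v] -= 2
                nxt[u] = nxt.get(u, 0) + 1
                if can_reach(adj, nxt, target):
                    return True
    return False

# Path on 4 vertices: 0-1-2-3; 4 pebbles on vertex 0 are just enough to reach vertex 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(can_reach(adj, {0: 4}, 2))   # True: two moves to vertex 1, one move to vertex 2
print(can_reach(adj, {0: 4}, 3))   # False: a target at distance 3 needs 2**3 = 8 pebbles
```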
Abstract:
This paper presents the Juste-Neige system for predicting the snow height on the ski runs of a resort using multi-agent simulation software. Its aim is to facilitate snow cover management in order to i) reduce the production cost of artificial snow and improve the profit margin of the companies managing the ski resorts; and ii) reduce water and energy consumption, and thus the environmental impact, by producing only the snow needed for a good skiing experience. The software provides maps with the predicted snow heights for up to 13 days. On these maps, the areas most exposed to snow erosion are highlighted. The software proceeds in three steps: i) interpolation of snow height measurements with a neural network; ii) local meteorological forecasts for every ski resort; iii) simulation of the impact caused by skiers using a multi-agent system. The software has been evaluated in the Swiss ski resort of Verbier and provides useful predictions.
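A minimal sketch of step i), interpolating scattered snow-height measurements with a small neural network; the coordinates, measurements, and network size are invented for illustration and do not reflect the Juste-Neige implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented measurement points: (x, y, elevation) -> snow height in cm.
rng = np.random.default_rng(0)
X_meas = rng.uniform(0, 1000, size=(50, 3))
y_meas = 30 + 0.05 * X_meas[:, 2] + rng.normal(0, 5, size=50)

# Fit a small multilayer perceptron as the spatial interpolator.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X_meas, y_meas)

# Predict snow height on a regular grid covering the ski run (fixed elevation here).
grid = np.array([[x, y, 500.0] for x in range(0, 1001, 100) for y in range(0, 1001, 100)])
print(np.round(model.predict(grid)[:5], 1))
```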
Abstract:
Tractography is a class of algorithms that aim to map in vivo the major neuronal pathways in the white matter from diffusion magnetic resonance imaging (MRI) data. These techniques offer a powerful tool to noninvasively investigate, at the macroscopic scale, the architecture of the neuronal connections of the brain. Unfortunately, the reconstructions recovered with existing tractography algorithms are not truly quantitative, even though diffusion MRI is a quantitative modality by nature. In fact, several techniques have been proposed in recent years to estimate, at the voxel level, intrinsic microstructural features of the tissue, such as axonal density and diameter, by using multicompartment models. In this paper, we present a novel framework to re-establish the link between tractography and tissue microstructure. Starting from an input set of candidate fiber tracts, estimated from the data using standard fiber-tracking techniques, we model the diffusion MRI signal in each voxel of the image as a linear combination of the restricted and hindered contributions generated in every location of the brain by these candidate tracts. Then, we seek the global weight of each of them, i.e., the effective contribution or volume, such that they jointly fit the measured signal as well as possible. We demonstrate that these weights can be easily recovered by solving a global convex optimization problem with efficient algorithms. The effectiveness of our approach has been evaluated both on a realistic phantom with known ground truth and on in vivo brain data. Results clearly demonstrate the benefits of the proposed formulation, opening new perspectives for a more quantitative and biologically plausible assessment of the structural connectivity of the brain.
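A toy sketch of the core fitting step: the signal is modeled as a linear combination of contributions from candidate tracts, and the non-negative tract weights are recovered by convex optimization (here plain non-negative least squares). The dictionary and signal below are random stand-ins, not real diffusion MRI data or the paper's exact forward model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# A: each column holds the (restricted + hindered) signal contribution of one candidate
# tract across all measurements; y: the measured diffusion MRI signal.
n_meas, n_tracts = 200, 20
A = np.abs(rng.normal(size=(n_meas, n_tracts)))
w_true = np.zeros(n_tracts)
w_true[[2, 7, 11]] = [1.0, 0.5, 2.0]          # only a few tracts actually contribute
y = A @ w_true + 0.01 * rng.normal(size=n_meas)

# Global convex fit: non-negative effective volumes that best explain the signal.
w_hat, residual = nnls(A, y)
print("recovered weights of the true tracts:", np.round(w_hat[[2, 7, 11]], 2))
```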
Abstract:
Undernutrition is a widespread problem in the intensive care unit (ICU) and is associated with a worse clinical outcome. A state of negative energy balance increases stress catabolism and is associated with increased morbidity and mortality in ICU patients. The increased morbidity related to undernutrition is correlated with an increase in the length of hospital stay and in health care costs. Enteral nutrition is the recommended feeding route in critically ill patients, but it is often insufficient to cover nutritional needs. The initiation of supplemental parenteral nutrition, when enteral nutrition is insufficient, could optimize nutritional therapy by preventing the onset of early energy deficiency, and could thus reduce morbidity, length of stay and costs, shorten the recovery period and, finally, improve quality of life. (C) 2009 Elsevier Masson SAS. All rights reserved.
Abstract:
In order to successfully deploy multicast services in QoS-aware networks, pricing architectures must take into account the particular characteristics of multicast sessions. With this objective, we propose a charging scheme for QoS multicast services, assuming that the unicast cost of each interconnecting link is known and expressed in terms of quality of service (QoS) parameters. Our scheme determines how the cost of a multicast session is distributed along a cost distribution tree (CDT), basing that distribution on the pre-existing unicast cost functions. The paper discusses in detail the main characteristics of the problem in a realistic interdomain scenario and how the proposed scheme would contribute to its solution.
Abstract:
This paper presents a new charging scheme for cost distribution along a point-to-multipoint connection when the destination nodes are responsible for the cost. The scheme focuses on QoS considerations, and a complete range of choices is presented, from a scheme that is safe for the network operator to one that is fair to the customer; the in-between cases are also covered. Specific and general problems, such as the impact of users disconnecting dynamically, are also discussed. The aim of this scheme is to encourage users to disperse the resource demand instead of maintaining a large number of direct connections to the data source, which would result in higher-than-necessary bandwidth use at the source; dispersing demand would benefit the overall performance of the network. The implementation of this task must balance the need to offer a competitive service against the risk that the network operator does not recover the cost of that service. Throughout this paper, multicast charging is discussed without reference to any specific category of service. The proposed scheme is also evaluated against the criteria set proposed in the European ATM charging project CANCAN.
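To make the two extremes of that range concrete, here is a small sketch of receiver-based cost sharing on a multicast tree: in a "fair to the customer" variant each link's cost is split among the receivers downstream of it, while in a "safe for the operator" variant each receiver is charged the full unicast cost of its path. The tree, link costs, and scheme names are illustrative only, not the paper's exact formulas.

```python
# Multicast tree as child -> (parent, unicast-derived link cost); 'src' is the root.
tree = {"a": ("src", 4.0), "b": ("a", 2.0), "c": ("a", 3.0), "d": ("src", 5.0)}
receivers = ["b", "c", "d"]

def path_links(node):
    """Return the (link, cost) pairs on the path from a receiver up to the source."""
    links = []
    while node != "src":
        parent, cost = tree[node]
        links.append((node, cost))
        node = parent
    return links

# Fair to the customer: each link cost is divided among all receivers using that link.
users_of_link = {}
for r in receivers:
    for link, _ in path_links(r):
        users_of_link.setdefault(link, []).append(r)
shared = {r: sum(cost / len(users_of_link[link]) for link, cost in path_links(r))
          for r in receivers}

# Safe for the operator: each receiver pays the full unicast cost of its own path.
full = {r: sum(cost for _, cost in path_links(r)) for r in receivers}

print("shared link costs:", shared)   # charges sum to the tree cost (4+2+3+5)
print("full path costs:  ", full)     # charges exceed the tree cost
```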
Abstract:
We propose a charging scheme for cost distribution along a multicast tree when the cost is the responsibility of the receivers. This scheme focuses on QoS considerations and does not depend on any specific type of service. The scheme has been designed to be used as a bridge between unicast and multicast services, solving the problem of charging for multicast services by means of unicast charging and existing QoS routing mechanisms. We also include a numerical comparison, discussions of the case of non-numerical or relative QoS, and the application to some service examples, in order to give a better understanding of the proposal.
Abstract:
Individual-as-maximizing-agent analogies result in a simple understanding of the functioning of the biological world. Identifying the conditions under which individuals can be regarded as fitness-maximizing agents is thus of considerable interest to biologists. Here, we compare different concepts of fitness maximization and discuss, within a single framework, the relationship between Hamilton's (J Theor Biol 7: 1-16, 1964) model of social interactions, Grafen's (J Evol Biol 20: 1243-1254, 2007a) formal Darwinism project, and the idea of evolutionarily stable strategies. We distinguish cases where phenotypic effects are additively separable from cases where they are not, the latter not being covered by Grafen's analysis. In both cases it is possible to define a maximand, in the form of an objective function phi(z), whose argument is the phenotype of an individual and whose derivative is proportional to Hamilton's inclusive fitness effect. However, this maximand can be identified with the expression for fecundity or fitness only in the case of additively separable phenotypic effects, making individual-as-maximizing-agent analogies unattractive (although formally correct) in general situations of social interaction. We also find an inconsistency in Grafen's characterization of the solution of his maximization program by use of inclusive fitness arguments. His results are in conflict with those on evolutionarily stable strategies obtained by applying inclusive fitness theory, and can be repaired only by changing the definition of the problem.
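The claimed relationship can be stated compactly. A sketch of the standard marginal form of the inclusive fitness effect is given below, assuming weak selection and pairwise interactions; the symbols w (fecundity/fitness), z (actor's phenotype), z' (partner's phenotype), and r (relatedness) are generic notation, not necessarily the paper's.

```latex
\frac{\mathrm{d}\,\phi(z)}{\mathrm{d}z}\;\propto\;
\underbrace{\frac{\partial w(z,z')}{\partial z}}_{\text{direct effect}}
\;+\;
r\,\underbrace{\frac{\partial w(z,z')}{\partial z'}}_{\text{indirect effect}}
```

A candidate phenotype is then singular (a potential evolutionarily stable strategy) where this derivative vanishes; only under additively separable effects can phi itself be read as fecundity or fitness.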
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own: in particular, the model calibration step is often difficult due to insufficient data. For example, when considering developmental systems, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to exploit the available data and overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase the convergence rate of the calibration process and to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
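A minimal sketch of the hierarchical idea: calibrate the upstream part of a small ODE gene-network model first, then fix those parameters and calibrate the downstream part, instead of fitting everything at once. The two-gene model, the coarse target values standing in for qualitative data, and the parameter names are invented for illustration; they are not the Arabidopsis model used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def simulate(params, t_end=10.0):
    k1, d1, k2, d2 = params
    # Gene 1 is produced constitutively; gene 2 is activated by gene 1.
    def rhs(t, x):
        g1, g2 = x
        return [k1 - d1 * g1, k2 * g1 / (1.0 + g1) - d2 * g2]
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], t_eval=[t_end])
    return sol.y[:, -1]          # (g1, g2) at the end of the simulation

target = np.array([2.0, 1.5])    # invented coarse targets standing in for qualitative data

# Stage 1: fit the upstream module (gene 1) only.
stage1 = minimize(lambda p: (simulate([p[0], p[1], 1.0, 1.0])[0] - target[0]) ** 2,
                  x0=[1.0, 1.0], method="Nelder-Mead")
k1, d1 = stage1.x

# Stage 2: keep the upstream parameters fixed and fit the downstream module (gene 2).
stage2 = minimize(lambda p: (simulate([k1, d1, p[0], p[1]])[1] - target[1]) ** 2,
                  x0=[1.0, 1.0], method="Nelder-Mead")
print("upstream:", np.round(stage1.x, 3), "downstream:", np.round(stage2.x, 3))
```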
Abstract:
The development of CT applications might become a public health problem if no effort is made regarding the justification and optimisation of examinations. This paper presents some guidance to ensure that the risk-benefit balance remains in favour of the patient, especially when dealing with examinations of young patients. In this context, particular attention has to be paid to the justification of the examination. When performing the acquisition, one needs to optimise the extent of the investigated volume together with the number of acquisition sequences used. Finally, the use of automatic exposure systems, now available on all units, and of the Diagnostic Reference Levels (DRL) should help radiologists to control the exposure of their patients.
Abstract:
Severe environmental conditions, coupled with the routine use of deicing chemicals and increasing traffic volume, tend to place extreme demands on portland cement concrete (PCC) pavements. In most instances, engineers have been able to specify and build PCC pavements that met these challenges. However, there have also been reports of premature deterioration that could not be specifically attributed to a single cause. Modern concrete mixtures have evolved to become very complex chemical systems. The complexity can be attributed to both the number of ingredients used in any given mixture and the various types and sources of the ingredients supplied to any given project. Local environmental conditions can also influence the outcome of paving projects. This research project investigated important variables that impact the homogeneity and rheology of concrete mixtures. The project consisted of a field study and a laboratory study. The field study collected information from six different projects in Iowa. The information that was collected during the field study documented cementitious material properties, plastic concrete properties, and hardened concrete properties. The laboratory study was used to develop baseline mixture variability information for the field study. It also investigated plastic concrete properties using various new devices to evaluate rheology and mixing efficiency. In addition, the lab study evaluated a strategy for the optimization of mortar and concrete mixtures containing supplementary cementitious materials. The results of the field studies indicated that the quality management concrete (QMC) mixtures being placed in the state generally exhibited good uniformity and good to excellent workability. Hardened concrete properties (compressive strength and hardened air content) were also satisfactory. The uniformity of the raw cementitious materials that were used on the projects could not be monitored as closely as was desired by the investigators; however, the information that was gathered indicated that the bulk chemical composition of most materials streams was reasonably uniform. Specific minerals phases in the cementitious materials were less uniform than the bulk chemical composition. The results of the laboratory study indicated that ternary mixtures show significant promise for improving the performance of concrete mixtures. The lab study also verified the results from prior projects that have indicated that bassanite is typically the major sulfate phase that is present in Iowa cements. This causes the cements to exhibit premature stiffening problems (false set) in laboratory testing. Fly ash helps to reduce the impact of premature stiffening because it behaves like a low-range water reducer in most instances. The premature stiffening problem can also be alleviated by increasing the water–cement ratio of the mixture and providing a remix cycle for the mixture.
Abstract:
Models incorporating more realistic customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, for solving CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP in the case of non-overlapping segments. If the number of elements in a segment's consideration set is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound by (i) simulations, called the randomized concave programming (RCP) method, and (ii) adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). The latter approach turns out to be very effective, essentially obtaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.
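A small sketch of the choice-model ingredient these formulations share: under a multinomial logit (MNL) model, a segment offered a set S buys product j with probability proportional to its preference weight among the offered products in its consideration set. The preference weights, prices, and sets below are invented for illustration; the SDCP/CDLP linear programs themselves are not reproduced here.

```python
# MNL purchase probabilities for one customer segment.
prefs = {"A": 1.2, "B": 0.8, "C": 0.5}      # preference weights on the consideration set
no_purchase = 1.0                           # weight of the no-purchase alternative
prices = {"A": 120.0, "B": 90.0, "C": 60.0}

def mnl_probs(offer_set):
    """P(buy j | offer set) for products in the segment's consideration set."""
    avail = [j for j in offer_set if j in prefs]
    denom = no_purchase + sum(prefs[j] for j in avail)
    return {j: prefs[j] / denom for j in avail}

def expected_revenue(offer_set):
    return sum(prices[j] * p for j, p in mnl_probs(offer_set).items())

for S in [("A",), ("A", "B"), ("A", "B", "C")]:
    print(S, round(expected_revenue(S), 2))
```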
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably lying between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons of the various bounds to evaluate their performance.
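For reference, the CDLP referred to above is commonly written as a linear program over the time t(S) each offer set S is made available, with time horizon T, capacity vector c, and per-unit-time expected revenue R(S) and capacity consumption Q(S) when S is offered. The notation below is the generic textbook form, not necessarily the paper's.

```latex
\max_{t(S)\ge 0}\ \sum_{S} R(S)\, t(S)
\quad\text{s.t.}\quad
\sum_{S} Q(S)\, t(S) \le c,
\qquad
\sum_{S} t(S) \le T .
```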
Abstract:
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, under the choice-set paradigm, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP called ?SDCP, and compare the performance of both methods on the benchmark data sets in the literature. Both the product constraints and the ?SDCP method are very simple and easy to implement, and are applicable to the case of overlapping segment consideration sets. In our computational testing on the benchmark data sets in the literature, SDCP with product constraints achieves the CDLP value at a fraction of the CPU time taken by column generation, and we believe it is a very promising approach for quickly approximating CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.