889 results for Axiomatic Models of Resource Allocation


Relevance:

100.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed a shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to exploit the platform parallelism effectively. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we address allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to deal effectively with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model that accounts for duration variability and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
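To make the problem class concrete, the following is a minimal Python sketch of a serial schedule-generation scheme for precedence-connected activities competing for one finite-capacity resource; the task data and capacity are invented, and this greedy heuristic only illustrates the problem structure, not the hybrid CP/OR methods developed in the thesis.

```python
# Minimal serial schedule-generation sketch for precedence- and
# resource-constrained tasks (illustrative only; not the hybrid
# CP/OR method described in the abstract).

tasks = {            # name: (duration, resource demand, predecessors)
    "A": (2, 1, []),
    "B": (3, 2, ["A"]),
    "C": (2, 2, ["A"]),
    "D": (1, 1, ["B", "C"]),
}
CAPACITY = 3         # assumed finite resource capacity

def schedule(tasks, capacity):
    start, finish = {}, {}
    usage = {}                     # time slot -> resource units in use
    for name in tasks:             # assumes dict order is a valid topological order
        dur, demand, preds = tasks[name]
        t = max((finish[p] for p in preds), default=0)
        # shift right until the resource fits for the whole duration
        while any(usage.get(t + k, 0) + demand > capacity for k in range(dur)):
            t += 1
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + demand
        start[name], finish[name] = t, t + dur
    return start, max(finish.values())

starts, makespan = schedule(tasks, CAPACITY)
print(starts, "makespan =", makespan)
```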

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a fully Bayesian approach that simultaneously combines basic event and statistically independent higher event-level failure data in fault tree quantification. Such higher-level data could correspond to train, sub-system or system failure events. The full Bayesian approach also allows the highest-level data that are usually available for existing facilities to be automatically propagated to lower levels. A simple example illustrates the proposed approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
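As a rough illustration of the basic ingredients (not the paper's fully Bayesian multi-level framework), the sketch below updates two basic-event failure probabilities from assumed binomial data with conjugate Beta priors and propagates posterior samples through a single OR gate; all priors and counts are invented.

```python
# Minimal conjugate-Bayes sketch: update basic-event failure probabilities
# from binomial test data, then propagate through a two-event OR gate.
# Illustrative only; the paper's fully Bayesian, multi-level approach is richer.
import numpy as np

rng = np.random.default_rng(0)

def posterior_samples(alpha, beta, failures, trials, n=100_000):
    """Beta(alpha, beta) prior + binomial data -> posterior samples."""
    return rng.beta(alpha + failures, beta + trials - failures, size=n)

# Assumed basic-event data (invented): failures observed in demand trials.
p1 = posterior_samples(alpha=1, beta=9, failures=2, trials=50)
p2 = posterior_samples(alpha=1, beta=9, failures=1, trials=80)

# Top event = OR gate over independent basic events.
p_top = 1.0 - (1.0 - p1) * (1.0 - p2)

print("mean P(top) =", p_top.mean())
print("95% interval =", np.quantile(p_top, [0.025, 0.975]))
```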

Relevance:

100.00%

Publisher:

Abstract:

We present results of a benchmark test evaluating the resource allocation capabilities of the project management software packages Acos Plus.1 8.2, CA SuperProject 5.0a, CS Project Professional 3.0, MS Project 2000, and Scitor Project Scheduler 8.0.1. The tests are based on 1560 instances of precedence- and resource-constrained project scheduling problems. For different complexity scenarios, we analyze the deviation of the makespan obtained by the software packages from the best feasible makespan known. Among the tested software packages, Acos Plus.1 and Project Scheduler show the best resource allocation performance. Moreover, our numerical analysis reveals a considerable performance gap between the implemented methods and state-of-the-art project scheduling algorithms, especially for large-sized problems. Thus, there is still significant potential for improving solutions to resource allocation problems in practice.
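A deviation measure of this kind is typically the relative gap between a package's makespan and the best known feasible makespan; a minimal sketch with invented numbers:

```python
# Relative makespan deviation from the best known feasible makespan,
# averaged over benchmark instances (numbers invented for illustration).
def relative_deviation(makespan, best_known):
    return (makespan - best_known) / best_known

results = [(95, 88), (142, 131), (61, 61)]   # (package makespan, best known)
gaps = [relative_deviation(m, b) for m, b in results]
print("average deviation = {:.1%}".format(sum(gaps) / len(gaps)))
```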

Relevance:

100.00%

Publisher:

Abstract:

The paper deals with batch scheduling problems in process industries where final products arise from several successive chemical or physical transformations of raw materials using multi-purpose equipment. In batch production mode, the total requirements of intermediate and final products are partitioned into batches. The production start of a batch at a given level requires the availability of all input products. We consider the problem of scheduling the production of given batches such that the makespan is minimized. Constraints like minimum and maximum time lags between successive production levels, sequence-dependent facility setup times, finite intermediate storages, production breaks, and time-varying manpower contribute to the complexity of this problem. We propose a new solution approach using models and methods of resource-constrained project scheduling, which (approximately) solves problems of industrial size within a reasonable amount of time.
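As an illustration of what minimum and maximum time lags mean in such models, the sketch below checks a candidate schedule against finish-to-start lag windows; the data are invented and this is only one common formulation, not the paper's full model.

```python
# Checking a candidate schedule against minimum and maximum time lags
# between successive production levels (illustrative data, not from the paper).
lags = [            # (predecessor, successor, min lag, max lag) in hours
    ("mix",   "react", 1, 4),    # intermediate must rest 1-4 h before reacting
    ("react", "fill",  0, 2),    # limited intermediate storage: at most 2 h wait
]
finish = {"mix": 3, "react": 8}
start = {"react": 6, "fill": 9}

def violations(lags, start, finish):
    out = []
    for pred, succ, lo, hi in lags:
        gap = start[succ] - finish[pred]
        if not (lo <= gap <= hi):
            out.append((pred, succ, gap))
    return out

print(violations(lags, start, finish))   # [] means the schedule respects all lags
```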

Relevance:

100.00%

Publisher:

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources. The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic. The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member (value was determined relative to that type of staff's ability to perform the job function of an RN, i.e., value for eight hours: RN = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units. The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization through the addition of a dollar coefficient to the objective function.
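A minimal sketch of an integer program with this structure (penalty-weighted objective, acuity-point demand constraints, minimum-RN and availability constraints) is given below, using the PuLP modelling library; all numbers are invented and the formulation is only loosely patterned on the description above, not the study's actual model.

```python
# Minimal integer-programming sketch of acuity-based staff allocation.
# All numbers are invented; this is not the study's actual model.
import pulp

units = {"ICU": {"acuity": 40, "min_rn": 3},    # demand in acuity points
         "MedSurg": {"acuity": 28, "min_rn": 2}}
staff_value = {"RN": 8, "LVN": 6}               # points delivered per 8-h shift
available = {"RN": 7, "LVN": 4}
penalty = {"RN": 1.0, "LVN": 0.8}               # allocation priority weights

prob = pulp.LpProblem("staffing", pulp.LpMinimize)
x = {(s, u): pulp.LpVariable(f"x_{s}_{u}", lowBound=0, cat="Integer")
     for s in staff_value for u in units}

# Objective: minimize penalty-weighted number of assigned personnel.
prob += pulp.lpSum(penalty[s] * x[s, u] for s in staff_value for u in units)

for u, d in units.items():
    prob += pulp.lpSum(staff_value[s] * x[s, u] for s in staff_value) >= d["acuity"]
    prob += x["RN", u] >= d["min_rn"]
for s, cap in available.items():
    prob += pulp.lpSum(x[s, u] for u in units) <= cap

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: int(v.value()) for k, v in x.items()})
```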

Relevance:

100.00%

Publisher:

Abstract:

This study focuses on the present-day surface elevation of the Greenland and Antarctic ice sheets. Based on 3 years of CryoSat-2 data acquisition, we derived new digital elevation models (DEMs) as well as elevation change maps and volume change estimates for both ice sheets. Here we present the new DEMs and their corresponding error maps. The accuracy of the derived DEMs for Greenland and Antarctica is similar to that of previous DEMs obtained by satellite-based laser and radar altimeters. Comparisons with ICESat data show that 80% of the CryoSat-2 DEMs have an uncertainty of less than 3 m ± 15 m. The surface elevation change rates between January 2011 and January 2014 are presented for both ice sheets. We compared our results to elevation change rates obtained from ICESat data covering the time period from 2003 to 2009. The comparison reveals that in West Antarctica the volume loss has increased by a factor of 3. It also shows an anomalous thickening in Dronning Maud Land, East Antarctica, which represents a known large-scale accumulation event. This anomaly partly compensates for the observed increased volume loss of the Antarctic Peninsula and West Antarctica. For Greenland we find a volume loss increased by a factor of 2.5 compared with the ICESat period, with large negative elevation changes concentrated at the west and southeast coasts. The combined volume change of Greenland and Antarctica over the observation period is estimated to be -503 ± 107 km**3/yr, with Greenland contributing nearly 75% of the total (-375 ± 24 km**3/yr).

Relevance:

100.00%

Publisher:

Abstract:

Climate change, including ocean acidification (OA), presents fundamental challenges to marine biodiversity and sustained ecosystem health. We determined reproductive response (measured as naupliar production), cuticle composition and stage-specific growth of the copepod Tisbe battagliai over three generations at four pH conditions (pH 7.67, 7.82, 7.95, and 8.06). Naupliar production increased significantly at pH 7.95 compared with pH 8.06, followed by a decline at pH 7.82. Naupliar production at pH 7.67 was higher than at pH 7.82. We attribute the increase at pH 7.95 to an initial stress response which was succeeded by a hormesis-like response at pH 7.67. A multi-generational modelling approach predicted a gradual decline in naupliar production over the next 100 years (equivalent to approximately 2430 generations). There was a significant growth reduction (mean length integrated across developmental stages) relative to controls. There was a significant increase in the proportion of carbon relative to oxygen within the cuticle as seawater pH decreased. Changes in growth, cuticle composition and naupliar production strongly suggest that copepods subjected to OA-induced stress preferentially reallocate resources towards maintaining reproductive output at the expense of somatic growth and cuticle composition. These responses may drive shifts in life history strategies that favour smaller brood sizes, females and perhaps later-maturing females, with the potential to profoundly destabilise marine trophodynamics.

Relevance:

100.00%

Publisher:

Abstract:

Anthropogenic CO2 emission will lead to an increase in seawater pCO2 of up to 80-100 Pa (800-1000 µatm) within this century and to an acidification of the oceans. Green sea urchins (Strongylocentrotus droebachiensis) occurring in the Kattegat experience seasonal hypercapnic and hypoxic conditions already today. Thus, anthropogenic CO2 emissions will add to existing values and lead to even higher pCO2 values of >200 Pa (>2000 µatm). To estimate the green sea urchins' potential to acclimate to acidified seawater, we calculated an energy budget and determined the extracellular acid-base status of adult S. droebachiensis exposed to moderately (102 to 145 Pa, 1007 to 1431 µatm) and highly (284 to 385 Pa, 2800 to 3800 µatm) elevated seawater pCO2 for 10 and 45 days. A 45-day exposure to elevated pCO2 resulted in a shift in energy budgets, leading to reduced somatic and reproductive growth. Metabolic rates were not significantly affected, but ammonium excretion increased in response to elevated pCO2. This led to decreased O:N ratios. These findings suggest that protein metabolism is possibly enhanced under elevated pCO2 in order to support ion homeostasis by increasing net acid extrusion. The perivisceral coelomic fluid acid-base status revealed that S. droebachiensis is able to fully (intermediate pCO2) or partially (high pCO2) compensate extracellular pH (pHe) changes by accumulation of bicarbonate (maximum increase 2.5 mM), albeit at a slower rate than typically observed in other taxa (10-day duration for full pHe compensation). At intermediate pCO2, sea urchins were able to maintain fully compensated pHe for 45 days. Sea urchins from the higher pCO2 treatment could be divided into two groups following medium-term acclimation: one group of experimental animals (29%) contained remnants of food in their digestive system and maintained partially compensated pHe (+2.3 mM HCO3), while the other group (71%) exhibited an empty digestive system and a severe metabolic acidosis (-0.5 pH units, -2.4 mM HCO3). There was no difference in mortality between the three pCO2 treatments. The results of this study suggest that S. droebachiensis occurring in the Kattegat might be pre-adapted to hypercapnia due to the natural variability in pCO2 in its habitat. We show for the first time that some echinoderm species can actively compensate extracellular pH. Seawater pCO2 values of >200 Pa, which will occur in the Kattegat within this century during seasonal hypoxic events, can possibly only be endured for a short period of a few weeks. Increases in anthropogenic CO2 emissions and leakages from potential sub-seabed CO2 storage (CCS) sites thus pose a threat to the ecologically and economically important species S. droebachiensis.

Relevance:

100.00%

Publisher:

Abstract:

We present a technique to estimate accurate speedups for parallel logic programs with relative independence from characteristics of a given implementation or underlying parallel hardware. The proposed technique is based on gathering accurate data describing one execution at run-time, which is fed to a simulator. Alternative schedulings are then simulated and estimates computed for the corresponding speedups. A tool implementing the aforementioned techniques is presented, and its predictions are compared to the performance of real systems, showing good correlation.
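A minimal sketch of the underlying idea, assuming recorded per-task execution times and ignoring task dependencies (which the real simulator does handle): replay invented durations through a simple greedy scheduler and derive a speedup estimate for a given number of processors.

```python
# Sketch: estimate speedup by replaying recorded task durations through a
# simple greedy scheduling simulator for a given number of processors.
# Task data are invented; the real tool records them from an actual execution.
import heapq

durations = [4.0, 3.0, 3.0, 2.0, 2.0, 1.0, 1.0]    # per-task run times (s)

def simulated_makespan(durations, processors):
    free_at = [0.0] * processors                     # next free time per processor
    heapq.heapify(free_at)
    for d in sorted(durations, reverse=True):        # longest-processing-time order
        t = heapq.heappop(free_at)
        heapq.heappush(free_at, t + d)
    return max(free_at)

sequential = sum(durations)
for p in (1, 2, 4):
    print(f"{p} processors: speedup = {sequential / simulated_makespan(durations, p):.2f}")
```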

Relevance:

100.00%

Publisher:

Abstract:

Performance studies of actual parallel systems usually tend to concentrate on the effectiveness of a given implementation. This is often done in the absolute, without quantitative reference to the potential parallelism contained in the programs from the point of view of the execution paradigm. We feel that studying the parallelism inherent to the programs is interesting, as it gives information about the best possible behavior of any implementation and thus allows contrasting the results obtained. We propose a method for obtaining ideal speedups for programs through a combination of sequential or parallel execution and simulation, and the algorithms that allow implementing the method. Our approach is novel and, we argue, more accurate than previously proposed methods, in that a crucial part of the data, the execution times of tasks, is obtained from actual executions, while the speedup is computed by simulation. This allows obtaining speedup (and other) data under controlled and ideal assumptions regarding issues such as the number of processors, the scheduling algorithm, overheads, etc. The results obtained can be used, for example, to evaluate the ideal parallelism that a program contains for a given model of execution and to compare such "perfect" parallelism to that obtained by a given implementation of that model. We also present a tool, IDRA, which implements the proposed method, and results obtained with IDRA for benchmark programs, which are then compared with those obtained in actual executions on real parallel systems.
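As a hedged sketch of what an "ideal speedup" computation can look like, assuming an invented task graph with recorded durations (IDRA obtains such data from actual executions), the snippet below takes total work divided by critical-path length as the speedup under unbounded processors and no overheads.

```python
# Sketch: "ideal" speedup of a task graph under unbounded processors,
# computed as total work divided by critical-path length. The task graph and
# durations are invented; the real tool derives them from actual executions.
from functools import lru_cache

duration = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    return duration[task] + max((earliest_finish(p) for p in preds[task]), default=0.0)

total_work = sum(duration.values())                        # sequential time
critical_path = max(earliest_finish(t) for t in duration)  # ideal parallel time
print(f"ideal speedup = {total_work / critical_path:.2f}")
```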

Relevance:

100.00%

Publisher:

Abstract:

Singular-value decomposition (SVD)-based multiple-input multiple-output (MIMO) systems, where the whole MIMO channel is decomposed into a number of unequally weighted single-input single-output (SISO) channels, have attracted a lot of attention in the wireless community. The unequal weighting of the SISO channels has led to intensive research on bit and power allocation, even in MIMO channel situations with poor scattering conditions, identified as the antenna correlation effect. In this situation, the unequal weighting of the SISO channels becomes even stronger. In comparison to SVD-assisted MIMO transmission, geometric mean decomposition (GMD)-based MIMO systems are able to compensate for the drawback of unequally weighted SISO channels that arises with the SVD, since the decomposition result is nearly independent of the antenna correlation effect. The remaining interference after the GMD-based signal processing can be easily removed by dirty-paper precoding, as demonstrated in this work. Our results show that GMD-based MIMO transmission has the potential to significantly simplify the bit- and power-loading processes and outperforms SVD-based MIMO transmission as long as the same QAM constellation size is used on all equally weighted SISO channels.
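A minimal numerical sketch of the contrast described above, using a randomly generated channel matrix: the SVD yields unequally weighted layer gains, whereas a GMD-based scheme targets the equal gain given by their geometric mean. This is an illustration of the decompositions, not the paper's transmission scheme.

```python
# Sketch: decompose a MIMO channel with the SVD and compare the unequal
# per-layer gains (singular values) with the equal gain a GMD-based scheme
# targets (their geometric mean). The channel realization is randomly generated.
import numpy as np

rng = np.random.default_rng(1)
nr, nt = 4, 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)          # H = U @ diag(s) @ Vh
print("SVD layer gains:", np.round(s, 3))
print("GMD target gain (geometric mean):", round(float(np.prod(s) ** (1 / len(s))), 3))
# With correlated antennas the spread of the singular values widens, which is
# exactly the effect that motivates equal-gain (GMD) layering.
```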

Relevance:

100.00%

Publisher:

Abstract:

The Mouse Tumor Biology (MTB) Database serves as a curated, integrated resource for information about tumor genetics and pathology in genetically defined strains of mice (i.e., inbred, transgenic and targeted mutation strains). Sources of information for the database include the published scientific literature and direct data submissions by the scientific community. Researchers access MTB using Web-based query forms and can use the database to answer such questions as ‘What tumors have been reported in transgenic mice created on a C57BL/6J background?’, ‘What tumors in mice are associated with mutations in the Trp53 gene?’ and ‘What pathology images are available for tumors of the mammary gland regardless of genetic background?’. MTB has been available on the Web since 1998 from the Mouse Genome Informatics web site (http://www.informatics.jax.org). We have recently implemented a number of enhancements to MTB including new query options, redesigned query forms and results pages for pathology and genetic data, and the addition of an electronic data submission and annotation tool for pathology data.

Relevance:

100.00%

Publisher:

Abstract:

After the 2010 Haiti earthquake, which hit the city of Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers and architects) from different Spanish universities and from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid) with one objective: evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, emergency and resource management. In this paper, as a first step towards estimating structural damage from future earthquakes in the country, damage functions have been calibrated by means of a two-stage procedure. After compiling a database of the damage observed in the city after the earthquake, the exposure model (building stock) was classified and, through an iterative two-step calibration process, a specific set of damage functions for the country is proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models have been analysed to choose the most appropriate for the seismic risk estimation in the city. Finally, in a subsequent paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
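As a hedged illustration of what calibrating a damage function can involve (not the paper's two-stage procedure), the sketch below fits a lognormal fragility curve to invented observed-damage fractions versus ground-motion intensity.

```python
# Sketch: fitting a lognormal damage (fragility) function to observed
# exceedance fractions vs. ground-motion intensity. Data points are invented;
# the paper's calibration over a full building-stock database is far richer.
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

pga = np.array([0.1, 0.2, 0.3, 0.4, 0.5])                  # intensity measure (g)
frac_exceeding = np.array([0.05, 0.22, 0.48, 0.70, 0.85])  # observed fraction damaged

def fragility(im, theta, beta):
    """P(damage state exceeded | im) with lognormal median theta, dispersion beta."""
    return norm.cdf(np.log(im / theta) / beta)

(theta, beta), _ = curve_fit(fragility, pga, frac_exceeding, p0=(0.3, 0.5))
print(f"median = {theta:.2f} g, dispersion = {beta:.2f}")
```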

Relevance:

100.00%

Publisher:

Abstract:

One of the most important lessons learned during the 2008-09 financial crisis was that the informational toolbox on which policymakers base their decisions about competitiveness became outdated in terms of both data sources and data analysis. The toolbox is particularly outdated when it comes to tapping the potential of micro data for the analysis of competitiveness, a serious problem given that it is firms, rather than countries, that compete on global markets.

Relevance:

100.00%

Publisher:

Abstract:

In this paper the utilization of high data-rate channels through multi-threaded sending and receiving is studied. As communication technology evolves, higher speeds are used more and more in various applications, but generating traffic at Gbps data rates also brings complications, especially if the UDP protocol is used and packet fragmentation must be avoided, for example in high-speed reliable transport protocols based on UDP. In such a situation the Ethernet packet size has to fit the standard 1500-byte MTU [1] widely used on the Internet. A system may not have enough capacity to send messages at the necessary rate in single-threaded mode; a possible solution is to use more threads, which can be efficient on today's widespread multicore systems. The fact that non-constant data flow is to be expected in a real network brings another object of study: automatic adaptation to traffic that changes at runtime. Cases investigated in this paper include adjusting the number of threads to reach a given speed and keeping the speed at a given rate when the CPU becomes heavily loaded by other processes while data are being sent.
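A minimal Python sketch of the basic mechanism, multiple threads pacing UDP datagrams toward a target aggregate rate, is shown below; the host, port, rate and thread count are placeholders, and the runtime adaptation of the thread count studied in the paper is not included.

```python
# Sketch: sending fixed-size UDP datagrams from several threads toward a
# target aggregate rate. Host, port and rate are placeholders.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9000       # assumed receiver address
PAYLOAD = b"x" * 1472                # 1472 B payload keeps the datagram within a 1500 B MTU
TARGET_BPS = 400_000_000             # aggregate target, bits per second
THREADS = 4
DURATION = 2.0                       # seconds to run the demo

def sender(rate_bps, stop_time):
    """Send datagrams at roughly rate_bps until stop_time (simple pacing loop)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = len(PAYLOAD) * 8 / rate_bps          # seconds between datagrams
    next_send = time.perf_counter()
    while time.perf_counter() < stop_time:
        sock.sendto(PAYLOAD, (HOST, PORT))
        next_send += interval
        delay = next_send - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
    sock.close()

stop = time.perf_counter() + DURATION
threads = [threading.Thread(target=sender, args=(TARGET_BPS / THREADS, stop))
           for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("done: each thread paced to", TARGET_BPS / THREADS / 1e6, "Mbit/s")
```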