977 results for Data Allocation


Relevance:

100.00%

Publisher:

Abstract:

Bank switching in embedded processors with a partitioned memory architecture results in both code size and run-time overhead. This work presents an algorithm, and its application, to assist the compiler in eliminating the redundant bank switching codes it introduces and in deciding the optimum data allocation to banked memory. A relation matrix formed for the memory bank state transition corresponding to each bank selection instruction is used for the detection of redundant codes. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data mapping scheme is subjected to a static machine code analysis, which identifies the one with the minimum number of bank switching codes. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory blocks and to other architectures, so high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
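
To make the redundancy-detection idea concrete, here is a minimal Python sketch that tracks the active memory bank along a straight-line instruction sequence and flags bank-select instructions that leave the state unchanged. The instruction encoding and the BANKSEL mnemonic are illustrative assumptions, not the paper's actual representation or its relation-matrix formulation.

```python
# Hypothetical sketch: detect redundant bank-select instructions by
# tracking the active memory bank along a straight-line code sequence.
def find_redundant_bank_switches(instructions):
    """Return indices of bank-select instructions that do not change
    the currently active bank and are therefore redundant."""
    active_bank = None            # bank state unknown on entry
    redundant = []
    for i, (opcode, operand) in enumerate(instructions):
        if opcode == "BANKSEL":   # assumed bank-select mnemonic
            if operand == active_bank:
                redundant.append(i)   # state transition is a no-op
            active_bank = operand
    return redundant

# Example: the second BANKSEL 1 is redundant.
code = [("BANKSEL", 1), ("MOVF", "x"), ("BANKSEL", 1), ("ADDWF", "y")]
print(find_redundant_bank_switches(code))  # -> [2]
```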

Relevance:

60.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single or a specified set of tasks. This specificity means the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis can significantly improve software quality and remains a challenging field. This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thus improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
Incorrect machine code sequences are identified using slicing techniques on the control flow graph generated from the machine code. An algorithm to assist the compiler in eliminating redundant bank switching codes and deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank switching codes in embedded system software, is proposed. A relation matrix and a state transition diagram, formed for the active memory bank state transition corresponding to each bank selection instruction, are used for the detection of redundant codes. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine code patterns, which drastically reduces state space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards the correct use of difficult microcontroller features in developing embedded systems.
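
The path-wise rule checking can be pictured with a small sketch: enumerate the execution paths of an acyclic control flow graph and test each path against a sequencing rule. The CFG encoding and the ADC-enable rule below are hypothetical stand-ins for the dissertation's processor-specific rules of inference.

```python
# Hypothetical sketch: check an instruction-sequencing rule on every
# execution path of an acyclic control flow graph.
def paths(cfg, node, path=()):
    """Enumerate all paths from `node` to exit nodes of an acyclic CFG."""
    path = path + (node,)
    succs = cfg.get(node, [])
    if not succs:
        yield path
        return
    for s in succs:
        yield from paths(cfg, s, path)

def violates_rule(instr_seq):
    """Illustrative rule: a READ_ADC must be preceded by ENABLE_ADC."""
    enabled = False
    for op in instr_seq:
        if op == "ENABLE_ADC":
            enabled = True
        elif op == "READ_ADC" and not enabled:
            return True
    return False

# CFG: node -> successors; labels: node -> instruction at that node.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
labels = {"A": "NOP", "B": "ENABLE_ADC", "C": "NOP", "D": "READ_ADC"}

for p in paths(cfg, "A"):
    if violates_rule([labels[n] for n in p]):
        print("rule violated on path:", " -> ".join(p))  # A -> C -> D
```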

Relevance:

40.00%

Publisher:

Abstract:

Sex allocation data in eusocial Hymenoptera (ants, bees and wasps) provide an excellent opportunity to assess the effectiveness of kin selection, because queens and workers differ in their relatedness to females and males. The first studies on sex allocation in eusocial Hymenoptera compared population sex investment ratios across species. Female-biased investment in monogyne (= with single-queen colonies) populations of ants suggested that workers manipulate sex allocation according to their higher relatedness to females than males (relatedness asymmetry). However, several factors may confound these comparisons across species. First, variation in relatedness asymmetry is typically associated with major changes in breeding system and life history that may also affect sex allocation. Secondly, the relative cost of females and males is difficult to estimate across sexually dimorphic taxa, such as ants. Thirdly, each species in the comparison may not represent an independent data point, because of phylogenetic relationships among species. Recently, stronger evidence that workers control sex allocation has been provided by intraspecific studies of sex ratio variation across colonies. In several species of eusocial Hymenoptera, colonies with high relatedness asymmetry produced mostly females, in contrast to colonies with low relatedness asymmetry, which produced mostly males. Additional signs of worker control were found by investigating proximate mechanisms of sex ratio manipulation in ants and wasps. However, worker control is not always effective, and further manipulative experiments will be needed to disentangle the multiple evolutionary factors and processes affecting sex allocation in eusocial Hymenoptera.

Relevance:

40.00%

Publisher:

Abstract:

This paper provides new evidence on the determinants of the allocation of the US federal budget to the states and tests the ability of congressional, electoral, and partisan theories to explain that allocation. We find that socio-economic characteristics are important explanatory variables but are not sufficient to explain the disparities in the distribution of federal monies. First, prestige committee membership is not conducive to pork-barrelling. We do not find any evidence that marginal states receive more funding; on the contrary, safe states tend to be rewarded. Also, states that are historically "swing" in presidential elections tend to receive more funds. Finally, we find strong evidence supporting partisan theories of budget allocation. States whose governor has the same political affiliation as the President receive more federal funds, while states whose representatives belong to a majority opposing the President's party receive fewer funds.

Relevance:

40.00%

Publisher:

Abstract:

Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work takes an inside look at designing a new recommender system that is capable of making suggestions for a sequence of activities, dividing people into subgroups in order to boost overall group satisfaction. However, this idea increases the problem complexity along several dimensions and poses a great challenge to the algorithm's performance. To understand the effectiveness, given the enhanced complexity of precise problem solving, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users through two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the Constraint Programming approach's computational time and efficacy. Generally, Local Search finds results much more quickly than Constraint Programming. Over a lengthy period of time, Local Search performs better than Constraint Programming, with similar final results.
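
A minimal sketch of the Local Search side of such a system might look as follows; the satisfaction matrix, single-user move operator, and greedy acceptance rule are illustrative assumptions rather than the thesis's actual model.

```python
# Hypothetical sketch: assign users to activity subgroups and hill-climb
# on total satisfaction with single-user reassignment moves.
import random

def local_search(satisfaction, n_activities, iters=1000, seed=0):
    """satisfaction[u][a] = how much user u likes activity a.
    Returns (assignment, total satisfaction)."""
    rng = random.Random(seed)
    n_users = len(satisfaction)
    assign = [rng.randrange(n_activities) for _ in range(n_users)]
    score = sum(satisfaction[u][assign[u]] for u in range(n_users))
    for _ in range(iters):
        u = rng.randrange(n_users)            # move: reassign one user
        a = rng.randrange(n_activities)
        delta = satisfaction[u][a] - satisfaction[u][assign[u]]
        if delta > 0:                         # accept improving moves only
            assign[u], score = a, score + delta
    return assign, score

sat = [[3, 1, 2], [1, 3, 2], [2, 2, 3], [3, 2, 1]]  # 4 users, 3 activities
print(local_search(sat, n_activities=3))
```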

Relevance:

40.00%

Publisher:

Abstract:

This paper studies the utilization of high data-rate channels by threading the sending and receiving of data. As communication technology evolves, higher speeds are used more and more in various applications. But generating traffic at Gbps data rates also brings complications, especially if the UDP protocol is used and packet fragmentation must be avoided, for example for high-speed reliable transport protocols based on UDP. In such situations the Ethernet packet size has to correspond to the standard 1500-byte MTU [1], which is widely used in the Internet. The system may not have enough capacity to send messages at the necessary rate in single-threaded mode. A possible solution is to use more threads, which can be efficient on widespread multicore systems. The fact that a non-constant data flow is to be expected in real networks brings another object of study: automatic adaptation to traffic that changes during runtime. Cases investigated in this paper include adjusting the number of threads to a given speed and keeping the speed at a given rate when the CPU becomes heavily loaded by other processes while sending data.
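
The thread-splitting idea can be sketched as follows: the target packet rate is divided evenly across worker threads so that pacing and system-call overhead are shared. The host, port, payload size, and sleep-based pacing are illustrative simplifications; a real generator would batch sends and measure the achieved rate.

```python
# Hypothetical sketch: several worker threads share a target UDP send
# rate. In CPython the GIL is released during sendto, so threads still
# help with this I/O-bound workload.
import socket, threading, time

PAYLOAD = b"x" * 1472   # fits a 1500-byte Ethernet MTU (20 IP + 8 UDP)

def sender(dest, packets_per_sec, duration):
    """Send UDP packets at a fixed per-thread rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / packets_per_sec
    end = time.time() + duration
    while time.time() < end:
        sock.sendto(PAYLOAD, dest)
        time.sleep(interval)        # crude pacing; real code would batch

def run(dest, total_pps, n_threads, duration=5):
    """Split the target packet rate evenly across n_threads workers."""
    per_thread = total_pps / n_threads
    threads = [threading.Thread(target=sender,
                                args=(dest, per_thread, duration))
               for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()

run(("127.0.0.1", 9000), total_pps=20000, n_threads=4)
```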

Relevance:

40.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

30.00%

Publisher:

Abstract:

Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions to be made: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP) hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and allocational part of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding new generated AP instances with reduced fixed costs.
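
To make the tabu search component concrete, here is a minimal single-start sketch for a simplified USAHLP: the neighborhood flips one node's hub status, non-hub nodes are allocated to their nearest hub, and recently flipped nodes are tabu for a fixed tenure. The toy cost function and parameters are assumptions, not the paper's calibrated heuristic.

```python
# Hypothetical sketch: tabu search over hub sets for a toy USAHLP.
import random

def nearest_hub_alloc(hubs, dist):
    """Single allocation: every node goes to its nearest hub."""
    return {i: min(hubs, key=lambda h: dist[i][h]) for i in range(len(dist))}

def cost(hubs, dist, fixed_cost):
    alloc = nearest_hub_alloc(hubs, dist)
    return sum(dist[i][alloc[i]] for i in alloc) + fixed_cost * len(hubs)

def tabu_search(dist, fixed_cost, iters=100, tenure=3, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    hubs = {rng.randrange(n)}                  # random initial hub set
    best, best_cost = set(hubs), cost(hubs, dist, fixed_cost)
    tabu = {}                                  # node -> iteration it frees up
    for it in range(iters):
        # evaluate all non-tabu single-flip neighbors, move to the best
        candidates = []
        for v in range(n):
            if tabu.get(v, 0) > it:
                continue
            cand = hubs ^ {v}                  # open or close hub v
            if cand:                           # keep at least one hub
                candidates.append((cost(cand, dist, fixed_cost), v))
        if not candidates:
            break
        c, v = min(candidates)
        hubs ^= {v}
        tabu[v] = it + tenure                  # forbid re-flipping v
        if c < best_cost:
            best, best_cost = set(hubs), c
    return best, best_cost

dist = [[0, 2, 9, 4], [2, 0, 6, 3], [9, 6, 0, 5], [4, 3, 5, 0]]
print(tabu_search(dist, fixed_cost=3.0))
```

A multi-start variant, as in the paper, would simply repeat this from several random initial hub sets and keep the best result.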

Relevance:

30.00%

Publisher:

Abstract:

We compared four strategies for inviting 91,456 women aged 50-69 years to one of six clinics for mammography screening and 40,142 men aged 60-79 years to one of 10 clinics for abdominal aortic aneurysm (AAA) screening. The strategies were invitation to the clinic nearest to the client and invitation to the clinic nearest to the client's area of residence defined by census small area, postcode and local government area. For each strategy we calculated the expected demand at each clinic and the travel distances for clients. We found that when women were allocated to mammography clinics on the basis of the local government area instead of their individual address, expected demand at one clinic increased by 60%, and 19% of clients were invited to attend a more remote clinic, entailing 99,000 km of additional travel. Similar results were obtained for men allocated to AAA clinics by their postcode of residence instead of their individual address: 55% difference in expected demand, 13% to a more remote clinic and 60,000 km of extra travel. Allocation on the basis of small areas did not show such great differences, except for travel distance, which was about 5% higher for each clinic type. We recommend that allocation of clients to screening clinics be made according to residential address, that assessment of the location of clinics be based on distances between residences and nearest clinic, but that planning new locations for clinics be aided with spatial analysis tools using small area demographic and social data.
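
The core comparison can be sketched in a few lines: allocate each client to the clinic nearest their address versus nearest their area centroid, and sum the extra travel incurred. Coordinates and Euclidean distance are illustrative simplifications of the study's geographic data.

```python
# Hypothetical sketch: extra travel from area-based vs address-based
# allocation of clients to their nearest clinic.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest(point, clinics):
    return min(clinics, key=lambda c: dist(point, c))

def extra_travel(clients, area_centroids, clinics):
    """clients: list of (address_xy, area_id); returns added distance
    when allocation uses the area centroid instead of the address."""
    total = 0.0
    for address, area in clients:
        by_address = nearest(address, clinics)
        by_area = nearest(area_centroids[area], clinics)
        total += dist(address, by_area) - dist(address, by_address)
    return total

clinics = [(0, 0), (10, 0)]
centroids = {"A": (6, 0)}                 # area centroid nearer clinic 2
clients = [((4, 0), "A"), ((7, 0), "A")]  # first client nearer clinic 1
print(extra_travel(clients, centroids, clinics))  # -> 2.0 extra units
```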

Relevance:

30.00%

Publisher:

Abstract:

Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
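
For readers unfamiliar with DEA, an input-oriented CCR efficiency score can be computed with one small linear program per decision-making unit (DMU), as sketched below. The university data here are made up; only the LP structure reflects the standard model.

```python
# Hypothetical sketch: input-oriented CCR DEA efficiency via one LP per
# DMU -- minimize theta s.t. the DMU's inputs, scaled by theta, can be
# matched by a nonnegative combination (lambda) of all DMUs.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Returns theta in (0, 1]; 1 means technically efficient."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1); c[0] = 1.0                  # minimize theta
    A, b = [], []
    for i in range(m):      # sum_j lambda_j * x_ji <= theta * x_oi
        A.append(np.concatenate(([-X[o, i]], X[:, i]))); b.append(0.0)
    for r in range(s):      # sum_j lambda_j * y_jr >= y_or
        A.append(np.concatenate(([0.0], -Y[:, r]))); b.append(-Y[o, r])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.fun

X = np.array([[5.0, 3.0], [8.0, 1.0], [7.0, 4.0]])  # inputs: staff, budget
Y = np.array([[10.0], [9.0], [8.0]])                # output: graduates
for o in range(3):
    print(f"DMU {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```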

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To analyze whether gender influences the survival of kidney transplant grafts and patients. METHODS: Systematic review with meta-analysis of cohort studies available in the Medline (PubMed), LILACS, CENTRAL, and Embase databases, complemented by manual searching and the grey literature. The selection of studies and the collection of data were conducted twice by independent reviewers, and disagreements were settled by a third reviewer. Graft and patient survival rates were evaluated as effectiveness measurements. Meta-analysis was conducted with the Review Manager® 5.2 software, through the application of a random effects model. Recipient, donor, and donor-recipient gender comparisons were evaluated. RESULTS: Twenty-nine studies involving 765,753 patients were included. Regarding graft survival, grafts from male donors were observed to have longer survival than those from female donors, but only over a 10-year follow-up period. Comparison between recipient genders did not show significant differences in any evaluated follow-up period. In the evaluation between donor-recipient genders, male donor-male recipient transplants were favored in a statistically significant way. No statistically significant differences were observed in patient survival for gender comparisons in any follow-up period evaluated. CONCLUSIONS: The quantitative analysis of the studies suggests that donor or recipient gender, when evaluated in isolation, does not influence patient or graft survival rates. However, the combination of donor and recipient genders may be a determining factor for graft survival.
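
The random-effects pooling step can be sketched with the standard DerSimonian-Laird estimator; the effect sizes below are made-up log hazard ratios, not the review's data.

```python
# Hypothetical sketch: DerSimonian-Laird random-effects meta-analysis.
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects under a random-effects model."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se

effects = [0.10, -0.05, 0.20]      # log hazard ratios from three studies
variances = [0.02, 0.03, 0.025]
est, se = dersimonian_laird(effects, variances)
print(f"pooled log HR = {est:.3f} +/- {1.96 * se:.3f}")
```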

Relevance:

30.00%

Publisher:

Abstract:

Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult and a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
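
A minimal sketch of an interference- and power-aware placement rule is given below: each candidate host is scored by an estimated performance degradation from co-hosted workloads plus the marginal power of hosting the VM. The degradation coefficients and linear power model are illustrative assumptions, not the paper's estimator.

```python
# Hypothetical sketch: score hosts by interference + marginal power and
# place the VM on the lowest-scoring host.
def degradation(host_vms, vm):
    """Crude interference estimate: same-type workloads (CPU-CPU or
    net-net) interfere more than mixed pairs."""
    same = sum(1 for v in host_vms if v["type"] == vm["type"])
    other = len(host_vms) - same
    return 0.10 * same + 0.03 * other

def marginal_power(host, vm):
    """Linear power model: added load scales the host's dynamic range."""
    return host["p_dynamic"] * vm["cpu"] / host["cpu_capacity"]

def place(hosts, vm, w_perf=1.0, w_power=0.5):
    def score(h):
        return (w_perf * degradation(h["vms"], vm)
                + w_power * marginal_power(h, vm))
    best = min(hosts, key=score)
    best["vms"].append(vm)
    return best["name"]

hosts = [
    {"name": "h1", "cpu_capacity": 32, "p_dynamic": 150.0,
     "vms": [{"type": "cpu", "cpu": 8}]},
    {"name": "h2", "cpu_capacity": 32, "p_dynamic": 150.0,
     "vms": [{"type": "net", "cpu": 4}]},
]
print(place(hosts, {"type": "cpu", "cpu": 8}))  # -> "h2" (less interference)
```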

Relevance:

30.00%

Publisher:

Abstract:

Liver transplantation is now the standard treatment for end-stage liver disease. Given the shortage of liver donors and the progressively higher number of patients waiting for transplantation, improvements in patient selection and optimization of timing for transplantation are needed. Several solutions have been suggested, including increasing the donor pool; a fair policy for allocation, not permitting variables such as age, gender, and race, or third-party payer status to play any role; and knowledge of the natural history of each liver disease for which transplantation is offered. To observe ethical rules and distributive justice (guarantee to every citizen the same opportunity to get an organ), the "sickest first" policy must be used. Studies have demonstrated that death has no relationship with waiting time, but rather with the severity of liver disease at the time of inclusion. Thus, waiting time is no longer part of the United Network for Organ Sharing distribution criteria. Waiting time only differentiates between equally severely diseased patients. The authors have analyzed the waiting list mortality and 1-year survival for patients of the State of São Paulo, from July 1997 through January 2001. Only the chronological criterion was used. According to "Secretaria de Estado da Saúde de São Paulo" data, among all waiting list deaths, 82.2% occurred within the first year, and 37.6% within the first 3 months following inclusion. The allocation of livers based on waiting time is neither fair nor ethical, impairs distributive justice and human rights, and does not occur in any other part of the world.

Relevance:

30.00%

Publisher:

Abstract:

Dispersion of returns has gained a lot of attention as a measure to distinguish good and bad investment opportunities over time. In this dissertation, cross-sectional return volatility is analyzed over a fifteen-year period across the S&P 100 Index composition. The main inference drawn from the data sample is that the canonical measure of dispersion is highly macro-risk driven and therefore biased more towards return volatility than towards its correlation component.
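
The canonical dispersion measure referred to here is typically the cross-sectional standard deviation of constituent returns at each date, as in the following sketch with made-up data.

```python
# Hypothetical sketch: cross-sectional dispersion = per-date standard
# deviation of returns across index constituents.
import numpy as np

def cross_sectional_dispersion(returns):
    """returns: (n_dates, n_stocks) array of simple returns.
    Gives one dispersion value per date."""
    return returns.std(axis=1, ddof=1)

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.02, size=(5, 100))   # 5 days, 100 constituents
print(cross_sectional_dispersion(r))
```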