836 results for Resource allocation.
Abstract:
According to climate models, drier summers are expected to occur more frequently in Central Europe over the coming decades, which may influence plant performance and competition in grasslands. Drought may affect the overall source–sink relations in plants, especially the allocation of solutes to above- and below-ground parts. To investigate solute export from a given leaf of broadleaf dock, a solution containing 57Co and 65Zn was introduced through a leaf flap, and export from this leaf was detected by analysing radionuclide contents in various plant parts. Under drought, less label was allocated to new leaves and more to roots. The observed alterations of source–sink relations in broadleaf dock were reversible during a subsequent short period of rewatering. These findings suggest that increased resource allocation to roots under drought improves the functionality of the plants.
Abstract:
The performance of tasks that are perceived as unnecessary or unreasonable, so-called illegitimate tasks, represents a new stressor concept that refers to assignments violating the norms associated with the role requirements of professional work. Research has shown that illegitimate tasks are associated with stress and counterproductive work behaviour. The purpose of this study was to provide insight into the contribution of organizational characteristics to the prevalence of illegitimate tasks in the work of frontline and middle managers. Using the Bern Illegitimate Task Scale (BITS) in a sample of 440 local government operations managers in 28 different organizations in Sweden, this study supports the theoretical assumptions that illegitimate tasks are positively related to stress and negatively related to satisfaction with work performance. Results further show that 10% of the variance in illegitimate tasks can be attributed to the organization in which the managers work. Multilevel referential analysis showed that the more an organization was characterized by competition for resources between units, unfair and arbitrary resource allocation, and obscure decision-making structures, the more illegitimate tasks its managers reported. These results should be valuable for strategic-level management, since they indicate that illegitimate tasks can be counteracted through the organization of work.
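For readers unfamiliar with how such a between-organization variance share is obtained, the following is a minimal sketch using a random-intercept multilevel model on synthetic data (the variable names, group sizes and effect magnitudes are illustrative, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: 440 managers nested in 28 organizations (values are illustrative).
rng = np.random.default_rng(0)
n_orgs, n_managers = 28, 440
org = rng.integers(0, n_orgs, size=n_managers)
org_effect = rng.normal(0.0, 0.35, size=n_orgs)                  # between-organization variation
bits = 2.5 + org_effect[org] + rng.normal(0.0, 1.0, n_managers)  # BITS score per manager

df = pd.DataFrame({"bits": bits, "org": org})

# Random-intercept ("null") model: BITS scores vary across organizations.
result = smf.mixedlm("bits ~ 1", df, groups=df["org"]).fit()

# Intraclass correlation: share of total variance attributable to the organization.
var_between = result.cov_re.iloc[0, 0]
var_within = result.scale
icc = var_between / (var_between + var_within)
print(f"Between-organization variance share (ICC): {icc:.1%}")
```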
Abstract:
Most commercial project management software packages include planning methods to devise schedules for resource-constrained projects. Because the planning methods implemented are proprietary information of the software vendors, the question arises as to how the packages differ in quality with respect to their resource-allocation capabilities. We experimentally evaluate the resource-allocation capabilities of eight recent software packages using 1,560 instances with 30, 60, and 120 activities from the well-known PSPLIB library. In some of the analyzed packages, the user may influence the resource allocation by means of multi-level priority rules, whereas in other packages only a few options can be chosen. We study the impact of various complexity parameters and priority rules on the project duration obtained by the software packages. The results indicate that the resource-allocation capabilities of these packages differ significantly. In general, the relative gap between the packages widens with increasing resource scarcity and with an increasing number of activities. Moreover, the selection of the priority rule has a considerable impact on the project duration. Surprisingly, when a priority rule is selected in the packages where this is possible, both the mean and the variance of the project duration are in general worse than for the packages which do not offer the selection of a priority rule.
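To illustrate the kind of priority-rule-driven resource allocation such packages perform, here is a minimal sketch of a serial schedule-generation scheme with a single renewable resource and a shortest-duration priority rule (the instance data and the rule are illustrative, not drawn from PSPLIB or from any of the evaluated packages):

```python
from itertools import count

# Illustrative instance: activity -> (duration, resource demand, set of predecessors).
activities = {
    "A": (3, 2, set()),
    "B": (2, 3, {"A"}),
    "C": (4, 2, {"A"}),
    "D": (2, 2, {"B", "C"}),
}
CAPACITY = 4  # capacity of the single renewable resource

def serial_sgs(acts, capacity):
    """Serial schedule-generation scheme with a shortest-duration priority rule."""
    start, finish = {}, {}
    usage = {}  # resource usage per time period
    order = sorted(acts, key=lambda a: (acts[a][0], a))  # priority list: shorter first
    scheduled = set()
    while len(scheduled) < len(acts):
        # Pick the highest-priority unscheduled activity whose predecessors are done.
        act = next(a for a in order
                   if a not in scheduled and acts[a][2] <= scheduled)
        dur, demand, preds = acts[act]
        earliest = max((finish[p] for p in preds), default=0)
        # Shift right until the resource profile can accommodate the activity.
        for t in count(earliest):
            if all(usage.get(t + k, 0) + demand <= capacity for k in range(dur)):
                start[act], finish[act] = t, t + dur
                for k in range(dur):
                    usage[t + k] = usage.get(t + k, 0) + demand
                scheduled.add(act)
                break
    return start, max(finish.values())

starts, makespan = serial_sgs(activities, CAPACITY)
print("start times:", starts, "makespan:", makespan)
```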
Abstract:
Executing the activities of a project requires one or several resources, which are generally scarce. Many resource-allocation methods assume that an activity's usage of these resources is constant during execution; in practice, however, the project manager may vary the resource usage of individual activities over time within prescribed bounds. This variation gives rise to a project scheduling problem that consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated equals the prescribed work content of each activity, and precedence and various work-content-related constraints are met.
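Under common assumptions for this problem class (discrete time periods t, a single renewable resource with per-period capacity R_t, and a decision variable r_{j,t} for the usage of activity j in period t; the notation below is ours, not the authors'), the problem can be sketched as:

```latex
\begin{align*}
\min\;& C_{\max} = \max_j F_j \\
\text{s.t.}\;& \sum_{t=S_j}^{F_j-1} r_{j,t} = W_j
    && \text{allocated units equal the work content of activity } j \\
& \underline{r}_j \le r_{j,t} \le \overline{r}_j \quad (S_j \le t < F_j)
    && \text{usage may vary only within prescribed bounds} \\
& \sum_{j:\, S_j \le t < F_j} r_{j,t} \le R_t \quad \forall t
    && \text{resource capacity in period } t \\
& S_j \ge F_i \quad \text{for each precedence } i \to j
    && \text{precedence constraints}
\end{align*}
```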
Abstract:
Due to the enormous increase in digital data volumes in recent years, a new parallel computing paradigm has arisen to process big data efficiently. Many of the systems based on this paradigm, called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of MapReduce systems is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results; however, most of the scenarios in which they are used are characterized by failures. Consequently, these frameworks include fault-tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and the providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, looking for methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we propose: (i) a formalization of a failure-detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the state-of-the-art framework in the data-intensive computing systems community. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
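As an illustration of what a feedback-based, container-level resource allocation loop can look like, the following sketch adjusts container sizes from observed utilisation with a damped controller (our own simplification; the constants and field names are illustrative and not taken from the thesis):

```python
from dataclasses import dataclass

@dataclass
class Container:
    """State of one container; field names are illustrative."""
    allocated_mb: int    # memory currently granted to the container
    observed_mb: float   # memory actually used, as reported by monitoring

def rebalance(containers, min_mb=512, max_mb=8192, headroom=0.2, gain=0.5):
    """Feedback step: move each allocation toward observed usage plus a safety headroom.

    `gain` damps the adjustment so allocations do not oscillate; the constants
    are illustrative defaults, not values from the thesis.
    """
    for c in containers:
        target = c.observed_mb * (1.0 + headroom)
        adjustment = gain * (target - c.allocated_mb)
        c.allocated_mb = int(min(max_mb, max(min_mb, c.allocated_mb + adjustment)))
    return containers

# Example: an over-provisioned and an under-provisioned container both converge
# toward their observed demand over repeated feedback iterations.
pool = [Container(allocated_mb=4096, observed_mb=1500),
        Container(allocated_mb=1024, observed_mb=1800)]
for _ in range(5):
    rebalance(pool)
print([c.allocated_mb for c in pool])
```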
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
This study examined whether the effectiveness of human resource management (HRM) practices is contingent on organizational climate and competitive strategy. The concepts of internal and external fit suggest that the positive relationship between HRM and subsequent productivity will be stronger for firms with a positive organizational climate and for firms using differentiation strategies. Resource allocation theories of motivation, on the other hand, predict that the relationship between HRM and productivity will be stronger for firms with a poor climate because employees working in these firms should have the greatest amount of spare capacity. The results supported the resource allocation argument.
Abstract:
This study examined whether the effectiveness of human resource management (HRM) practices is contingent on organizational climate and competitive strategy. The concepts of internal and external fit suggest that the positive relationship between HRM and subsequent productivity will be stronger for firms with a positive organizational climate and for firms using differentiation strategies. Resource allocation theories of motivation, on the other hand, predict that the relationship between HRM and productivity will be stronger for firms with a poor climate because employees working in these firms should have the greatest amount of spare capacity. The results supported the resource allocation argument. © 2005 Southern Management Association. All rights reserved.
Abstract:
In future massively distributed service-based computational systems, resources will span many locations, organisations and platforms. In such systems, the ability to allocate resources in a desired configuration, in a scalable and robust manner, will be essential. We build upon a previous evolutionary market-based approach to achieving resource allocation in decentralised systems by considering heterogeneous providers. In such scenarios, providers may be said to value their resources differently. We demonstrate how, given such valuations, the outcome allocation may be predicted. Furthermore, we describe how the approach may be used to achieve a stable, uneven load-balance of our choosing. We analyse the system's expected behaviour, and validate our predictions in simulation. Our approach is fully decentralised; no part of the system is weaker than any other. No cooperation between nodes is assumed; only self-interest is relied upon. A particular desired allocation is achieved transparently to users, as no modification to the buyers is required.
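One way to picture how heterogeneous provider valuations shape the resulting allocation is the following sketch, in which self-interested buyers repeatedly take the cheapest offer and each provider posts its private valuation plus a load-dependent term (an illustration of the general idea only, not the paper's evolutionary market mechanism):

```python
import random

# Providers value (price) their resources differently; numbers are illustrative.
valuations = {"provider_a": 1.0, "provider_b": 2.0, "provider_c": 4.0}
load = {p: 0 for p in valuations}

def price(provider):
    """Posted price: private valuation plus a congestion term that grows with load."""
    return valuations[provider] + 0.05 * load[provider]

random.seed(1)
for _ in range(10_000):                    # self-interested buyers arrive one by one
    cheapest = min(valuations, key=price)  # each buyer simply takes the best offer
    load[cheapest] += 1

total = sum(load.values())
# The steady-state load split is uneven and tracks the providers' valuations.
print({p: round(load[p] / total, 3) for p in load})
```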
Abstract:
Purpose – The purpose of this research is to study the perceived impact of certain factors on the resource allocation processes of Nigerian universities and to suggest a framework that will help practitioners and academics understand and improve such processes. Design/methodology/approach – The study adopted an interpretive qualitative approach aimed at an in-depth understanding of the resource allocation experiences of key university personnel and their perceptions of the contextual factors affecting such processes. The analysis of individual narratives from each university established the conditions and factors impacting the resource allocation processes within each institution. Findings – The resource allocation process issues in Nigerian universities may be categorised into people (core and peripheral units' challenges, and politics and power); process (resource allocation processes); and resources (critical financial shortage and resource dependence response). The study also provides insight that resourcing efficiency in Nigerian universities appears strongly constrained by rivalry among the resource managers. The efficient resource allocation process (ERAP) model is proposed to resolve the identified resourcing deficiencies. Research limitations/implications – The research does not aim to provide generalizable observations but rather an in-depth account of the perceived factors and their impact on resource allocation processes in Nigerian universities. The study is limited to internal resource allocation issues within the universities and excludes external funding factors. The resource managers' responses to the identified factors may affect their internal resourcing efficiency. Further research using larger empirical samples is required to obtain more widespread results and implications for all universities. Originality/value – This study contributes a fresh literature framework to resource allocation processes, focusing on 'people', 'process' and 'resources'. A middle-range theory triangulation is also developed to support a better understanding of resourcing process management. The study will be of interest to university managers and policy makers.
Abstract:
This research focuses on optimising resource utilisation in wireless mobile networks while taking into account the quality of video streaming services experienced by users. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated, including video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for evaluating the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing of the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment were used in the validation tests. It is shown that Pause Intensity is closely correlated with subjective quality measured as the Mean Opinion Score, and that this correlation is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for given user requirements, communication system specifications and network performance. This approach addresses both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment in which the proposed research framework is examined and the results are compared with existing scheduling methods in terms of achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritization of users according to the perceived quality of the services they receive, while a trade-off between fairness and efficiency is maintained through online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of an adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst users for different scheduling policies have been demonstrated in the context of LTE. Finally, work on the interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism for user data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The derived scheduling policy is used to regulate the performance of the resource allocation across the fairness-efficiency spectrum.
The associated offloading mechanism can properly control the number of users within the coverage areas of the macro-cell base station and each of the WiFi access points involved. The performance of non-seamless, user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
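Since the abstract does not give the formula, the following sketch only illustrates the idea of a continuity metric that jointly penalises pause duration and pause frequency over a playback window (the formula and names are our assumption, not the thesis's definition of Pause Intensity):

```python
def pause_intensity(pause_durations, window_s):
    """Illustrative joint measure of playback (dis)continuity.

    Combines the fraction of the window spent paused with how often pauses occur;
    this is a stand-in formula, not necessarily the metric defined in the thesis.
    Returns 0.0 for uninterrupted playback.
    """
    if not pause_durations or window_s <= 0:
        return 0.0
    duration_ratio = sum(pause_durations) / window_s   # share of window spent paused
    frequency = len(pause_durations) / window_s        # pauses per second of playback
    return duration_ratio * frequency

# Example: two 2-second pauses vs. one 4-second pause in a 60-second window;
# the same total paused time scores worse when interruptions are more frequent.
print(pause_intensity([2.0, 2.0], 60.0))
print(pause_intensity([4.0], 60.0))
```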
Abstract:
The degree of reliance of newborn sharks on energy reserves from maternal resource allocation and the timescales over which these animals develop foraging skills are critical factors towards understanding the ecological role of top predators in marine ecosystems. We used muscle tissue stable carbon isotopic composition and fatty acid analysis of bull sharks Carcharhinus leucas to investigate early-life feeding ecology in conjunction with maternal resource dependency. Values of δ13C of some young-of-the-year sharks were highly enriched, reflecting inputs from the marine-based diet and foraging locations of their mothers. This group of sharks also contained high levels of the 20:3ω9 fatty acid, which accumulates during periods of essential fatty acid deficiency, suggesting inadequate or undeveloped foraging skills and possible reliance on maternal provisioning. A loss of maternal signal in δ13C values occurred at a length of approximately 100 cm, with muscle tissue δ13C values reflecting a transition from more freshwater/estuarine-based diets to marine-based diets with increasing length. Similarly, fatty acids from sharks >100 cm indicated no signs of essential fatty acid deficiency, implying adequate foraging. By combining stable carbon isotopes and fatty acids, our results provided important constraints on the timing of the loss of maternal isotopic signal and the development of foraging skills in relation to shark size and imply that molecular markers such as fatty acids are useful for the determination of maternal resource dependency.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
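As an illustration of the kind of performance model described, the sketch below fits a support vector regression that maps a resource allocation (CPU share, memory, I/O bandwidth) to a performance metric on synthetic data (the scikit-learn usage, feature set and data generation are our own illustration, not the thesis's implementation):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic training data: each row is an allocation (CPU share, memory GB, I/O MB/s)
# and the target is a measured performance metric (e.g. requests/s); purely illustrative.
rng = np.random.default_rng(42)
X = rng.uniform([0.1, 1.0, 10.0], [4.0, 16.0, 200.0], size=(300, 3))
y = 50 * np.log1p(X[:, 0]) + 3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 2, 300)

# SVR with standardised inputs; such a model can then drive VM sizing decisions
# by predicting performance for candidate allocations before they are granted.
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X, y)

candidate = np.array([[2.0, 8.0, 100.0]])   # a hypothetical VM configuration
print("predicted performance:", model.predict(candidate)[0])
```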