931 results for Plant and Equipment-resource Allocation
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDF graphs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
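A minimal sketch can make the period-inference idea concrete. The following is not the authors' CP model; it is the classical maximum-cycle-ratio bound for a cyclic precedence graph, with a hypothetical edge encoding (i, j, duration, iteration offset):

```python
def feasible(n, edges, period):
    # A periodic schedule with the given period exists iff the constraint
    # graph with arc weights dur - period*offset has no positive cycle.
    # Longest-path Bellman-Ford; still relaxing after n+1 full sweeps
    # indicates a positive cycle.
    dist = [0.0] * n
    for _ in range(n + 1):
        changed = False
        for i, j, dur, off in edges:
            cand = dist[i] + dur - period * off
            if cand > dist[j] + 1e-9:
                dist[j] = cand
                changed = True
        if not changed:
            return True
    return False

def min_period(n, edges, eps=1e-4):
    # Binary search for the smallest feasible period; the optimum equals
    # the maximum over cycles of (total duration / total iteration offset).
    lo, hi = 0.0, sum(d for _, _, d, _ in edges) + 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(n, edges, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For two activities in a loop, edges (0, 1, 2, 0) and (1, 0, 3, 1), the only cycle has ratio (2+3)/1, so the minimum period is 5.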
Abstract:
According to climate models, drier summers are expected more frequently in Central Europe during the next decades, which may influence plant performance and competition in grassland. The overall source–sink relations in plants, especially the allocation of solutes to above- and below-ground parts, may be affected by drought. To investigate solute export from a given leaf of broadleaf dock, a solution containing 57Co and 65Zn was introduced through a leaf flap. The export from this leaf was detected by analysing radionuclide contents in various plant parts. Less label was allocated to new leaves and more to roots under drought. The observed alterations of source–sink relations in broadleaf dock were reversible during a subsequent short period of rewatering. These findings suggest an increased resource allocation to roots under drought, improving the functionality of the plants.
Abstract:
Upon attack by leaf herbivores, many plants reallocate photoassimilates below ground. However, little is known about how plants respond when the roots themselves come under attack. We investigated induced resource allocation in maize plants infested by larvae of the Western corn rootworm, Diabrotica virgifera virgifera. Using radioactive 11CO2, we demonstrate that root-attacked maize plants allocate more newly fixed 11C-labelled carbon from source leaves to stems, but not to roots. Reduced meristematic activity and reduced invertase activity in attacked maize root systems are identified as possible drivers of this shoot reallocation response. The increased allocation of photoassimilates to stems is shown to be associated with a marked thickening of these tissues and increased growth of stem-borne crown roots. A strong quantitative correlation between stem thickness and root regrowth across different watering levels suggests that retaining photoassimilates in the shoots may help root-attacked plants to compensate for the loss of belowground tissues. Taken together, our results indicate that induced tolerance may be an important strategy of plants to withstand belowground attack. Furthermore, root herbivore-induced carbon reallocation needs to be taken into account when studying plant-mediated interactions between herbivores.
Abstract:
1. Leaf-herbivore attack often triggers induced resistance in plants. However, certain specialist herbivores can also take advantage of the induced metabolic changes. In some cases, they even manipulate plant resistance, leading to a phenomenon called induced susceptibility. Compared to above-ground plant-insect interactions, little is known about the prevalence and consequences of induced responses below-ground. 2. A recent study suggested that feeding by the specialist root herbivore Diabrotica virgifera virgifera makes maize roots more susceptible to conspecifics. To better understand this phenomenon, we conducted a series of experiments to study the behavioural responses and elucidate the underlying biochemical mechanisms. 3. We found that D. virgifera benefitted from feeding on a root system in groups of intermediate size (3–9 larvae/plant in the laboratory), whereas its performance was reduced in large groups (12 larvae/plant). Interestingly, the herbivore was able to select host plants with a suitable density of conspecifics by using the induced plant volatile (E)-β-caryophyllene in a dose-dependent manner. Using a split-root experiment, we show that the induced susceptibility is systemic and, therefore, plant-mediated. Chemical analyses of plant resource reallocation and defences upon herbivory showed that the systemic induced susceptibility is likely to stem from a combination of (i) increased free amino acid concentrations and (ii) relaxation of defence inducibility. 4. These findings show that herbivores can use induced plant volatiles in a density-dependent manner to aggregate on a host plant and change its metabolism to their own benefit. Our study furthermore helps to explain the remarkable ecological success of D. virgifera in maize fields around the world.
Abstract:
As a consequence of artificial selection for specific traits, crop plants underwent considerable genotypic and phenotypic changes during the process of domestication. These changes may have led to reduced resistance in the cultivated plant due to shifts in resource allocation from defensive traits to increased growth rates and yield. Modern maize (Zea mays ssp. mays) was domesticated from its ancestor Balsas teosinte (Z. mays ssp. parviglumis) approximately 9000 years ago. Although maize displays a high genetic overlap with its direct ancestor and other annual teosintes, several studies show that maize and its ancestors differ in their resistance phenotypes, with teosintes being less susceptible to herbivore damage. However, the underlying mechanisms are poorly understood. Here we addressed the question of to what extent maize domestication has affected two crucial chemical defence traits and one physical defence trait, and whether differences in their expression may explain the differences in herbivore resistance levels. The ontogenetic trajectories of 1,4-benzoxazin-3-ones, maysin and leaf toughness were monitored for different leaf types across several maize cultivars and teosinte accessions during early vegetative growth stages. We found significant quantitative and qualitative differences in 1,4-benzoxazin-3-one accumulation in an initial pairwise comparison, but we did not find consistent differences between wild and cultivated genotypes during a more thorough examination employing several cultivars/accessions. Yet, 1,4-benzoxazin-3-one levels tended to decline more rapidly with plant age in the modern maize cultivars. Foliar maysin levels and leaf toughness increased with plant age in a leaf-specific manner, but were also unaffected by domestication. Based on our findings we suggest that defence traits other than the ones investigated are responsible for the observed differences in herbivore resistance between teosinte and maize.
Furthermore, our results indicate that single pairwise comparisons may lead to false conclusions regarding the effects of domestication on defensive and possibly other traits.
Abstract:
We propose the use of the "infotaxis" search strategy as the navigation system of a robotic platform, able to search for and localize infectious foci by detecting changes in the profile of volatile organic compounds emitted by an infected plant. We built a simple and cost-effective robot platform that substitutes light sensors for odour sensors, and studied its robustness and performance under non-ideal conditions such as the existence of obstacles due to land topology or weeds.
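The infotaxis strategy itself can be sketched compactly: the searcher keeps a Bayesian belief over the source location and moves to the cell that minimises the expected entropy of that belief after the next observation. The detection model and grid below are illustrative assumptions, not the robot platform described above:

```python
import math

def entropy(belief):
    # Shannon entropy of a {cell: probability} belief.
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def bayes(belief, pos, detected, det_prob):
    # Posterior over source cells after one detection / non-detection at pos.
    post = {s: p * (det_prob(pos, s) if detected else 1 - det_prob(pos, s))
            for s, p in belief.items()}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

def infotaxis_step(belief, pos, moves, det_prob):
    # Move to the candidate cell with the lowest expected posterior entropy.
    best, best_h = pos, float("inf")
    for m in moves(pos):
        p_hit = sum(p * det_prob(m, s) for s, p in belief.items())
        h = 0.0
        if p_hit > 1e-12:
            h += p_hit * entropy(bayes(belief, m, True, det_prob))
        if p_hit < 1 - 1e-12:
            h += (1 - p_hit) * entropy(bayes(belief, m, False, det_prob))
        if h < best_h:
            best, best_h = m, h
    return best
```

On a five-cell line with detection probability exp(-|agent - source|) and a uniform belief, a detection at cell 2 concentrates the posterior on cell 2, lowering the entropy of the belief.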
Abstract:
Increasingly demanding conditions in the power generation sector require applying all available technologies to optimize processes and reduce costs. An integrated asset management strategy, combining technical analysis with operation and maintenance management, can help to improve plant performance, flexibility and reliability. In order to deploy such a model it is necessary to combine plant data and specific equipment condition information with the different systems devoted to analysing performance and equipment condition, and to take advantage of the results to support operation and maintenance decisions. This model, which has been addressed in some detail for electricity transmission and distribution networks, is not yet widespread in the power generation sector, as proposed in this study for the case of a combined power plant. Its application would yield direct benefits for operation and maintenance and for interaction with the energy market.
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers.
A drawback to this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63% to 38GW. A further rise of 17% to 43GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air to the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as we increase room temperature, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
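The cubic fan law and exponential leakage dependence described above already imply a non-trivial server-level optimum. The toy model below illustrates the tradeoff; all coefficients and the thermal model are hypothetical, not the thesis' fitted models:

```python
import math

def server_power(speed, load, t_amb=25.0,
                 k_fan=5.0, k_leak=2.0, beta=0.04, c=0.8):
    # Toy server power model (all coefficients hypothetical): fan power
    # grows cubically with fan speed, leakage grows exponentially with
    # CPU temperature, and higher fan speed lowers temperature.
    temp = t_amb + load / (c * speed)      # simple convective cooling
    fan = k_fan * speed ** 3               # cubic fan law
    leak = k_leak * math.exp(beta * temp)  # exponential leakage
    return fan + leak

def best_speed(load, grid=None):
    # Grid search for the fan speed minimising total power at this load.
    grid = grid or [0.2 + 0.01 * i for i in range(381)]  # 0.2 .. 4.0
    return min(grid, key=lambda s: server_power(s, load))
```

Over-provisioning air (high speed) and under-cooling (low speed) both waste energy; the minimum sits in between, and it shifts upward as load grows.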
Abstract:
Due to the increase in huge data volumes, a new parallel computing paradigm to process big data efficiently has arisen. Many of these systems, called data-intensive computing systems, follow the Google MapReduce programming model. Their main advantage is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, such scenarios are not realistic. Consequently, these frameworks provide fault-tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same tradeoff between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. In this thesis, we have addressed the coexistence of reliability and resource efficiency in MapReduce-based systems through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure-detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and finally (iii) a novel feedback-based resource allocation system at the container level.
Our generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the state-of-the-art framework in the data-intensive computing systems community. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
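A feedback-based allocation loop at the container level can be caricatured as a proportional controller; the function name, constants and units below are illustrative and do not correspond to Hadoop YARN's API:

```python
def feedback_allocate(alloc, used, target=0.7, gain=0.5,
                      floor=0.25, cap=8.0):
    # One control step: steer utilisation (used/alloc) toward `target`
    # by scaling the allocation proportionally to the utilisation error,
    # clamped to [floor, cap] resource units.
    util = used / alloc
    new = alloc * (1 + gain * (util - target) / target)
    return min(cap, max(floor, new))
```

Iterating with a constant demand `used` converges to the fixed point used/target, i.e. the smallest allocation that keeps utilisation at the target instead of a static over-provisioned reservation.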
Abstract:
Conceptual frameworks of dryland degradation commonly include ecohydrological feedbacks between landscape spatial organization and resource loss, whereby decreasing cover and size of vegetation patches result in higher water and soil losses, which lead to further vegetation loss. However, the impacts of these feedbacks on dryland dynamics in response to external stress have barely been tested. Using a spatially explicit model, we represented feedbacks between vegetation pattern and landscape resource loss by establishing a negative dependence of plant establishment on the connectivity of runoff-source areas (e.g., bare soils). We assessed the impact of various feedback strengths on the response of dryland ecosystems to changing external conditions. In general, for a given external pressure, these connectivity-mediated feedbacks decrease vegetation cover at equilibrium, which indicates a decrease in ecosystem resistance. Along a gradient of gradually increasing environmental pressure (e.g., aridity), the connectivity-mediated feedbacks decrease the amount of pressure required to cause a critical shift to a degraded state (ecosystem resilience). If environmental conditions improve, these feedbacks increase the pressure release needed to achieve ecosystem recovery (restoration potential). The impact of these feedbacks on dryland response to external stress is markedly non-linear, reflecting the non-linear negative relationship between bare-soil connectivity and vegetation cover. Modelling studies of dryland vegetation dynamics that do not account for the connectivity-mediated feedbacks studied here may overestimate the resistance, resilience and restoration potential of drylands in response to environmental and human pressures. Our results also suggest that changes in vegetation pattern and associated hydrological connectivity may be more informative early-warning indicators of dryland degradation than changes in vegetation cover.
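A mean-field caricature illustrates the qualitative effect of such a connectivity-mediated feedback. This is only a sketch under invented rates and a percolation-like connectivity proxy, not the paper's spatially explicit model:

```python
def equilibrium_cover(aridity, feedback, steps=5000):
    # Vegetation cover v evolves under logistic-like establishment and
    # aridity-driven mortality; establishment is reduced by bare-soil
    # connectivity, approximated here by a steep (percolation-like)
    # function of the bare-soil fraction.
    v, est, mort = 0.9, 0.3, 0.1
    for _ in range(steps):
        bare = 1.0 - v
        conn = bare ** 4 / (bare ** 4 + 0.3 ** 4)  # connectivity proxy
        growth = est * v * bare * (1 - aridity) * (1 - feedback * conn)
        v = max(0.0, min(1.0, v + growth - mort * v * aridity))
    return v
```

At moderate aridity the feedback-free system settles at a high cover, while a strong feedback drives the same system to collapse, i.e. lower resistance and an earlier critical shift.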
Abstract:
This study examined whether the effectiveness of human resource management (HRM) practices is contingent on organizational climate and competitive strategy. The concepts of internal and external fit suggest that the positive relationship between HRM and subsequent productivity will be stronger for firms with a positive organizational climate and for firms using differentiation strategies. Resource allocation theories of motivation, on the other hand, predict that the relationship between HRM and productivity will be stronger for firms with a poor climate because employees working in these firms should have the greatest amount of spare capacity. The results supported the resource allocation argument.
Abstract:
There is evidence that high-tillering, small-panicled pearl millet landraces are better adapted to the severe, unpredictable drought stress of the arid zones of NW India than are low-tillering, large-panicled modern varieties, which significantly outyield the landraces under favourable conditions. In this paper, we analyse the relationship of arid-zone adaptation with the expression, under optimum conditions, of yield components that determine either the potential sink size or the ability to realise this potential. The objective is to test whether selection under optimal conditions for yield components can identify germplasm with adaptation to arid zones in NW India, as this could potentially improve the efficiency of pearl millet improvement programs targeting arid zones. We use data from an evaluation of over 100 landraces from NW India, conducted for two seasons under both severely drought-stressed and favourable conditions in northwest and south India. Trial average grain yields ranged from 14 g m(-2) to 182 g m(-2). The landraces were grouped into clusters, based on their phenology and yield components as measured under well-watered conditions in south India. In environments without pre-flowering drought stress, tillering type had no effect on potential sink size, but low-tillering, large-panicled landraces yielded significantly more grain, as they were better able to realise their potential sink size. By contrast, in two low-yielding arid-zone environments which experienced pre-anthesis drought stress, low-tillering, large-panicled landraces yielded significantly less grain than high-tillering ones with comparable phenology, because of both a reduced potential sink size and a reduced ability to realise this potential. The results indicate that the high grain yield of low-tillering, large-panicled landraces under favourable conditions is due to improved partitioning, rather than resource capture.
However, under severe stress with restricted assimilate supply, high-tillering, small-panicled landraces are better able to produce a reproductive sink than are large-panicled ones. Selection under optimum conditions for yield components representing a resource allocation pattern favouring high yield under severe drought stress, combined with a capability to increase grain yield if assimilates are available, was more effective than direct selection for grain yield in identifying germplasm adapted to arid zones. Incorporating such selection in early generations of variety testing could reduce the reliance on random stress environments. This should improve the efficiency of millet breeding programs targeting arid zones. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
In future massively distributed service-based computational systems, resources will span many locations, organisations and platforms. In such systems, the ability to allocate resources in a desired configuration, in a scalable and robust manner, will be essential. We build upon a previous evolutionary market-based approach to achieving resource allocation in decentralised systems by considering heterogeneous providers. In such scenarios, providers may be said to value their resources differently. We demonstrate how, given such valuations, the outcome allocation may be predicted. Furthermore, we describe how the approach may be used to achieve a stable, uneven load-balance of our choosing. We analyse the system's expected behaviour, and validate our predictions in simulation. Our approach is fully decentralised; no part of the system is weaker than any other. No cooperation between nodes is assumed; only self-interest is relied upon. A particular desired allocation is achieved transparently to users, as no modification to the buyers is required.
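A greedy toy market illustrates how heterogeneous valuations yield a predictable, uneven load-balance; the pricing rule below is an illustrative assumption, not the paper's evolutionary mechanism:

```python
def market_allocate(valuations, n_tasks):
    # Each task goes to the provider quoting the lowest price, where a
    # provider's price grows with its own valuation and current load.
    # Greedy assignment roughly equalises marginal prices, so final
    # loads are predictably uneven: inversely related to valuations.
    loads = [0] * len(valuations)
    for _ in range(n_tasks):
        i = min(range(len(valuations)),
                key=lambda k: valuations[k] * (loads[k] + 1))
        loads[i] += 1
    return loads
```

With valuations [1.0, 2.0], the cheaper provider ends up with roughly twice the load of the more expensive one, an uneven but stable balance that can be predicted from the valuations alone.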
Abstract:
Purpose – The purpose of this research is to study the perceived impact of some factors on the resources allocation processes of the Nigerian universities and to suggest a framework that will help practitioners and academics to understand and improve such processes. Design/methodology/approach – The study adopted an interpretive qualitative approach aimed at an ‘in-depth’ understanding of the resource allocation experiences of key university personnel and their perceptions of the contextual factors affecting such processes. The analysis of individual narratives from each university established the conditions and factors impacting the resources allocation processes within each institution. Findings – The resources allocation process issues in the Nigerian universities may be categorised into people (core and peripheral units’ challenge, and politics and power); process (resources allocation processes); and resources (critical financial shortage and resources dependence response). The study also provides insight that resourcing efficiency in Nigerian universities appears strongly constrained by rivalry among the resource managers. The efficient resources allocation process (ERAP) model is proposed to resolve the identified resourcing deficiencies. Research limitations/implications – The research does not aim to provide generalizable observations, but rather an ‘in-depth’ account of the perceived factors and their impact on the resources allocation processes in Nigerian universities. The study is limited to the internal resources allocation issues within the universities and excludes external funding factors. The resource managers’ responses to the identified factors may affect their internal resourcing efficiency. Further research using more empirical samples is required to obtain more widespread results and the implications for all universities.
Originality/value – This study contributes a fresh literature framework for resources allocation processes, focusing on ‘people’, ‘process’ and ‘resources’. A middle-range theory triangulation is also developed to support a better understanding of resourcing process management. The study will be of interest to university managers and policy makers.