985 results for resource consumption
Abstract:
Recent research shows that because they rely on separate goals, cognitions about not performing a behaviour are not simple opposites of cognitions about performing the same behaviour. Using this perspective, two studies (N = 758 & N = 104) examined the psycho-social determinants of reduction in resource consumption. Results showed that the goals associated with reducing versus not reducing resource consumption were not simple opposites (Study 1). Additionally, the discriminant validity of the Theory of Planned Behaviour constructs associated with reducing versus not reducing resource consumption was demonstrated (Studies 1 & 2). Moreover, results revealed the incremental validity of both Intentions (to reduce and to not reduce resource consumption) for predicting a series of behaviours (Studies 1 & 2). Finally, results indicated a mediating role for the importance of ecological dimensions in the effect of both Intentions on a mock TV choice, and a mediating role for the importance of non-ecological dimensions in the effect of the Intention not to reduce on the same TV choice. The discussion is organized around the consequences, at both theoretical and applied levels, of considering separate motivational systems for reducing and not reducing resource consumption.
Abstract:
In the cerebral cortex, the small volume of the extracellular space in relation to the volume enclosed by synapses suggests an important functional role for this relationship. It is well known that there are atoms and molecules in the extracellular space that are absolutely necessary for synapses to function (e.g., calcium). I propose here the hypothesis that the rapid shift of these atoms and molecules from extracellular to intrasynaptic compartments represents the consumption of a shared, limited resource available to local volumes of neural tissue. Such consumption results in a dramatic competition among synapses for resources necessary for their function. In this paper, I explore a theory in which this resource consumption plays a critical role in the way local volumes of neural tissue operate. On short time scales, this principle of resource consumption permits a tissue volume to choose those synapses that function in a particular context and thereby helps to integrate the many neural signals that impinge on a tissue volume at any given moment. On longer time scales, the same principle aids in the stable storage and recall of information. The theory provides one framework for understanding how cerebral cortical tissue volumes integrate, attend to, store, and recall information. In this account, the capacity of neural tissue to attend to stimuli is intimately tied to the way tissue volumes are organized at fine spatial scales.
Abstract:
Recycling, substitution and product life extension are identified as significant factors contributing to an extension of the time to exhaustion of industrially important materials. A quantitative assessment of the significance of virtually all materials to the U.K. is made. Copper is identified as one of the most important materials deserving further investigation into potential resource savings through increased recycling. The other factors listed above are accounted for in the modelling technique employed. United Kingdom copper flows are qualitatively and statistically described for the years 1949-1976. Less accurate statistics are developed for 1922-1948. Adaptive-expectations-type causal models of total, unalloyed, and alloyed copper demand are successfully constructed and are used to generate future scenarios. Evidence is demonstrated for a break in the historical link between U.K. copper demand and industrial production. Simple causal models of potential copper scrap supply are constructed and a comparison made with actual old scrap withdrawals. Accurate adaptive-expectations-type models of total scrap demand are developed, but no conclusion is reached about the price elasticity of scrap demand. Various scenarios of copper goods demand are forecast, together with their effect on copper scrap demand. The potential to recover up to an extra 100,000 tonnes/year of generally lower-grade old scrap is identified. Policy options are examined and the following recommendations made: 1) A total investment of up to £67 million in secondary refining capacity by the year 2000 is needed. 2) The copper scrap content of copper-bearing goods should be specified to aid recovery. 3) A U.K. copper scrap buffer stock scheme would be advantageous for the secondary copper industry. Finally, the methodology used is summarised for potential application to other materials.
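As a rough illustration of the adaptive-expectations modelling approach mentioned above, the following Python sketch shows the general form of such a demand model. It is illustrative only: the smoothing parameter lam, the linear demand form, and all coefficients are assumptions, not the thesis's fitted UK copper model.

# Minimal sketch of an adaptive-expectations demand model.
# Expected price: p_exp[t] = p_exp[t-1] + lam * (p[t-1] - p_exp[t-1])
# Demand:         d[t]     = a + b * ip[t] + c * p_exp[t]
# All coefficients below are illustrative, not fitted values.

def adaptive_expectations_demand(prices, ip_index, lam=0.4, a=500.0, b=2.0, c=-0.8):
    """Return modelled demands given observed prices and an
    industrial-production index (one value per year)."""
    p_exp = prices[0]          # initialise expectation at the first observed price
    demands = []
    for t in range(len(prices)):
        if t > 0:
            # partially adjust the expectation towards last year's actual price
            p_exp += lam * (prices[t - 1] - p_exp)
        demands.append(a + b * ip_index[t] + c * p_exp)
    return demands

# Example: three years of (price, industrial production) data
print(adaptive_expectations_demand([100, 120, 90], [50, 52, 51]))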
Abstract:
Sustainable resource use is one of the most important environmental issues of our times. It is closely related to discussions on the 'peaking' of various natural resources serving as energy sources, agricultural nutrients, or metals indispensable in high-technology applications. Although the peaking theory remains controversial, it is commonly recognized that a more sustainable use of resources would alleviate negative environmental impacts related to resource use. In this thesis, sustainable resource use is analysed from a practical standpoint, through several different case studies. Four of these case studies relate to resource metabolism in the Canton of Geneva in Switzerland: the aim was to model the evolution of chosen resource stocks and flows in the coming decades. The studied resources were copper (a bulk metal), phosphorus (a vital agricultural nutrient), and wood (a renewable resource). In addition, the case of lithium (a critical metal) was analysed briefly in a qualitative manner and in an electric mobility perspective. Beyond the Geneva case studies, this thesis also includes a case study on the sustainability of space life support systems. Space life support systems are systems whose aim is to provide the crew of a spacecraft with the necessary metabolic consumables over the course of a mission. Sustainability was again analysed from a resource use perspective. In this case study, the functioning of two different types of life support systems, ARES and BIORAT, were evaluated and compared; these systems represent, respectively, physico-chemical and biological life support systems. Space life support systems could in fact be used as a kind of 'laboratory of sustainability' given that they represent closed and relatively simple systems compared to complex and open terrestrial systems such as the Canton of Geneva. The chosen analysis method used in the Geneva case studies was dynamic material flow analysis: dynamic material flow models were constructed for the resources copper, phosphorus, and wood. Besides a baseline scenario, various alternative scenarios (notably involving increased recycling) were also examined. In the case of space life support systems, the methodology of material flow analysis was also employed, but as the data available on the dynamic behaviour of the systems was insufficient, only static simulations could be performed. The results of the case studies in the Canton of Geneva show the following: were resource use to follow population growth, resource consumption would be multiplied by nearly 1.2 by 2030 and by 1.5 by 2080. A complete transition to electric mobility would be expected to increase copper consumption per capita only slightly (+5%), while the lithium demand in cars would increase 350-fold. Among the alternative scenarios, phosphorus imports could be decreased by recycling sewage sludge or human urine; however, the health and environmental impacts of these options have yet to be studied. Increasing the wood production in the Canton would not significantly decrease the dependence on wood imports as the Canton's production represents only 5% of total consumption. In the comparison of space life support systems ARES and BIORAT, BIORAT outperforms ARES in resource use but not in energy use. However, as the systems are dimensioned very differently, it remains questionable whether they can be compared outright.
In conclusion, the use of dynamic material flow analysis can provide useful information for policy makers and strategic decision-making; however, uncertainty in reference data greatly influences the precision of the results. Space life support systems constitute an extreme case of resource-using systems; nevertheless, it is not clear how their example could be of immediate use to terrestrial systems.
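As a rough illustration of the dynamic material flow analysis used in the Geneva case studies, the following Python sketch simulates an in-use stock with a fixed product lifetime. The inflow series, growth rate, and lifetime are invented for illustration; real dynamic MFA uses lifetime distributions and calibrated data.

# Minimal dynamic material flow sketch: an in-use stock driven by inflows,
# with outflows released after a fixed product lifetime (a strong
# simplification of the lifetime distributions used in real dynamic MFA).

def simulate_stock(inflows, lifetime):
    """Return (stock, outflow) time series for a fixed-lifetime cohort model."""
    stock, stocks, outflows = 0.0, [], []
    for t, inflow in enumerate(inflows):
        outflow = inflows[t - lifetime] if t >= lifetime else 0.0
        stock += inflow - outflow
        stocks.append(stock)
        outflows.append(outflow)
    return stocks, outflows

# Illustrative inflows growing slowly over the horizon, 10-step lifetime
inflows = [100 * 1.01 ** t for t in range(30)]
stocks, outflows = simulate_stock(inflows, lifetime=10)
print(stocks[-1], outflows[-1])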
Abstract:
Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness.
Abstract:
Traditional resource management has had as its main objective the optimisation of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid Markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The SORMA project aims to allow resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA’s motivation is to achieve efficient resource utilisation by maximising revenue for resource providers, and minimising the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that desired Quality of Service levels meet the expectations of market participants. This paper explains the proposed use of an Economically Enhanced Resource Manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximisation across multiple Service Level Agreements.
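The papers above do not detail the EERM's internal algorithms; purely as a hypothetical sketch of the revenue maximisation idea, admission of service level agreements under limited capacity can be framed as a knapsack-style selection. The greedy heuristic, SLA tuples, and capacity figures below are all assumptions for illustration.

# Hypothetical sketch (not the EERM's actual algorithm): choose which
# service level agreements to admit so that total revenue is maximised
# without exceeding capacity, using a greedy revenue-per-CPU heuristic.

def admit_slas(slas, cpu_capacity):
    """slas: list of (name, revenue, cpu_demand). Greedy admission."""
    admitted, used = [], 0.0
    for name, revenue, cpu in sorted(slas, key=lambda s: s[1] / s[2], reverse=True):
        if used + cpu <= cpu_capacity:
            admitted.append(name)
            used += cpu
    return admitted

print(admit_slas([("gold", 90, 4), ("silver", 50, 2), ("bronze", 20, 2)], 6))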
Abstract:
Objective. This research study had two goals: (1) to describe resource consumption patterns for Medi-Cal children with cystic fibrosis, and (2) to explore the feasibility, from a rate design perspective, of developing specialized managed care plans for such a special needs population. Background. Children with special health care needs (CSHN) comprise about 2% of the California Medicaid pediatric population. CSHN have rare but serious health problems, such as cystic fibrosis. Medicaid programs, including Medi-Cal, are enrolling more and more beneficiaries in managed care to control costs. CSHN, however, do not fit the wellness model underlying most managed care plans. Child health advocates believe that both efficiency and quality will suffer if CSHN are removed from regionalized special care centers and scattered among general purpose plans. They believe that CSHN should be "carved out" from enrollment in general plans. One alternative is the Specialized Managed Care Plan, tailored for CSHN. Methods. The study population consisted of children under age 21 with CF who were eligible for Medi-Cal and the California Children's Services program (CCS) during 1991. Health Care Financing Administration (HCFA) Medicaid Tape-to-Tape data were analyzed as part of a California Children's Hospital Association (CCHA) project. Results. Mean Medi-Cal expenditures per month enrolled were $2,302 for 457 CF children, compared to about $1,270 for all 47,000 CCS special needs children and roughly $60 for almost 2.6 million "regular needs" children. For CF children, inpatient care (80%) and outpatient drugs (9%) were the major cost drivers, with all outpatient visits comprising only 2% of expenditures. About one-third of CF children were eligible due to AFDC (Aid to Families with Dependent Children). Age group explained about 17% of all expenditure variation. Regression analysis was used to select the best capitation rate structure (rate cells by age and eligibility group). Sensitivity analysis estimated moderate financial risk for a statewide plan (360 enrollees), but severe risk for single-county implementation due to small numbers of children. Conclusions. Study results support the carve-out of CSHN due to unique expenditure patterns. The Specialized Managed Care Plan concept appears feasible from a rate design perspective given sufficient enrollees.
Abstract:
We present a static analysis that infers both upper and lower bounds on the usage that a logic program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) resource usage functions for all predicates in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed some preliminary experiments with some concrete resources such as execution steps, bytes sent or received by an application, number of files left open, number of accesses to a database, number of calls to a procedure, number of asserts/retracts, etc. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.
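As a toy illustration of the notion of a user-definable resource (this is not the paper's analyser), the Python sketch below attaches a unit cost to execution steps and expresses the derived usage bounds as a closed-form function of input size, using naive list reversal as the analysed program.

# Toy illustration of a user-definable resource (not the paper's analyser):
# a resource is a unit cost attached to elementary operations, and the
# analysis goal is a usage bound expressed as a function of input size n.

STEP_COST = 1   # user-defined basic cost of one execution step

def reverse_steps(n):
    # Naive list reversal satisfies T(n) = T(n-1) + (n + 1), T(0) = 1,
    # whose closed form gives both the lower and the upper bound here,
    # since the program is deterministic in its step count.
    return n * (n + 3) // 2 + 1

def step_usage_bounds(n):
    cost = STEP_COST * reverse_steps(n)
    return cost, cost   # (lower bound, upper bound) coincide

print(step_usage_bounds(10))   # resource usage as a function of input size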
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost when compared to the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present in a tutorial way a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
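As a hypothetical sketch of the granularity-control decision on the distributing side (the cost model, overhead constants, and task tuples are invented, not Ciao's implementation), a task is only worth spawning remotely when its estimated cost exceeds the creation and communication overheads:

# Hypothetical granularity-control sketch (not Ciao's implementation):
# execute a task remotely only when its estimated cost outweighs the
# fixed overhead of creating the task and shipping its data.

SPAWN_OVERHEAD = 150.0      # assumed cost of task creation (illustrative)
COMM_COST_PER_BYTE = 0.01   # assumed cost of sending one byte

def worth_spawning(estimated_cost, payload_bytes):
    overhead = SPAWN_OVERHEAD + COMM_COST_PER_BYTE * payload_bytes
    return estimated_cost > overhead

def schedule(tasks):
    """tasks: list of (task_id, estimated_cost, payload_bytes)."""
    remote = [t for t in tasks if worth_spawning(t[1], t[2])]
    local = [t for t in tasks if not worth_spawning(t[1], t[2])]
    return remote, local

print(schedule([("t1", 2000, 4096), ("t2", 40, 128)]))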
Abstract:
We present a generic analysis that infers both upper and lower bounds on the usage that a program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) cost functions for all procedures in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed some experiments with some concrete resource-related properties such as execution steps, bits sent or received by an application, number of arithmetic operations performed, number of calls to a procedure, number of transactions, etc., presenting the resource usage functions inferred and the times taken to perform the analysis. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.
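As a toy illustration of checking a resource-related property against the analysis results (the real system uses a dedicated assertion language; the bound functions below are invented), an asserted bound can be compared with the inferred upper-bound usage function over the input sizes of interest:

# Hypothetical sketch of resource assertion checking: verify that an
# inferred upper-bound usage function stays within a user-asserted bound.

def check_assertion(inferred_ub, asserted_bound, sizes):
    """Return the input sizes (if any) at which the assertion fails."""
    return [n for n in sizes if inferred_ub(n) > asserted_bound(n)]

inferred = lambda n: 3 * n * n + 5        # e.g. inferred execution steps
asserted = lambda n: 4 * n * n + 10       # user assertion: steps <= 4*n^2 + 10

violations = check_assertion(inferred, asserted, range(1, 100))
print("assertion holds" if not violations else f"fails at n={violations[0]}")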
Abstract:
With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded when some of the objects they store are frequently accessed, while other cache instances are less frequently used. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing stored data. Considering the possible conflict of balancing multiple resources at the same time, we give CPU and memory resources weighted priorities based on the runtime load distributions: a scarcer resource is given a higher weight than a less scarce resource when load balancing. The system imbalance degree is evaluated from monitoring information and from the utility load of each node, a unit for resource consumption. Moreover, since continuous rebalancing of the system may affect the QoS of applications utilizing the cache system, our data selection policy ensures that each data migration minimizes the system imbalance degree and hence that the total reconfiguration cost is minimized. An extensive simulation is conducted to compare our policy with other policies. Our policy shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
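As a toy reconstruction of the scarcity-weighting idea (the paper's exact imbalance-degree and utility-load formulas may differ; the utilisation figures are invented), the scarcer resource receives the higher weight and each node is scored by a weighted sum of its utilisations:

# Toy sketch of scarcity-weighted multi-resource scoring (the paper's
# exact imbalance-degree and utility-load formulas may differ).

def scarcity_weights(cpu_utils, mem_utils):
    """Weight each resource by its mean utilisation: the scarcer
    (more heavily used) resource gets the higher weight."""
    cpu_avg = sum(cpu_utils) / len(cpu_utils)
    mem_avg = sum(mem_utils) / len(mem_utils)
    total = cpu_avg + mem_avg
    return cpu_avg / total, mem_avg / total

def node_loads(cpu_utils, mem_utils):
    w_cpu, w_mem = scarcity_weights(cpu_utils, mem_utils)
    return [w_cpu * c + w_mem * m for c, m in zip(cpu_utils, mem_utils)]

def imbalance_degree(loads):
    """Spread of weighted loads; migration candidates should shrink this."""
    return max(loads) - min(loads)

loads = node_loads([0.9, 0.4, 0.3], [0.5, 0.5, 0.2])
print(loads, imbalance_degree(loads))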
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach for a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study of all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using batch-mode learning. The regression analysis indicates a significant decrease in MAD (p-value = 0.063) and a significant (p-value = 0.044) increase in Prop(MAD < 1).
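The two evaluation metrics are straightforward to compute; the following Python sketch shows MAD and Prop(MAD < 1) on invented illustrative data (not the study's gastroenteritis data set):

# Sketch of the two evaluation metrics used above: mean absolute
# difference (MAD) between predicted and actual LOS, and the proportion
# of predictions whose absolute error is under one day.

def mad(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def prop_mad_under_1(predicted, actual):
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) < 1)
    return hits / len(actual)

pred = [2.5, 1.2, 4.0, 3.1]   # illustrative LOS predictions (days)
act = [2.0, 1.0, 6.0, 3.0]    # illustrative actual LOS (days)
print(mad(pred, act), prop_mad_under_1(pred, act))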