29 results for Resource-based theory


Relevance: 80.00%

Abstract:

Social movement theories offer useful conceptual and analytical tools for the study of global media reform movements. This article is a critical analysis of the Communication Rights in the Information Society (CRIS) campaign. It explores the campaign's successes and blind spots in the light of social movement theory, in particular resource mobilization theory (RMT), and offers practical directions for moving the movement from where it is to where it ought to be.

Relevance: 40.00%

Abstract:

How useful is a quantum dynamical operation for quantum information processing? Motivated by this question, we investigate several strength measures quantifying the resources intrinsic to a quantum operation. We develop a general theory of such strength measures, based on axiomatic considerations independent of state-based resources. The power of this theory is demonstrated with applications to quantum communication complexity, quantum computational complexity, and entanglement generation by unitary operations.
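One concrete operation-level quantity mentioned as an application above is entanglement generation by a unitary. The following Python sketch is only an illustration of that idea (it is not the paper's strength measure): it computes the entanglement entropy that a CNOT gate creates from a fixed product state.

    import numpy as np

    # Hedged illustration: entanglement generated by a unitary acting on a
    # product state, as a simple operation-based quantity. Not the paper's measure.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    zero = np.array([1, 0], dtype=complex)
    psi_out = CNOT @ np.kron(plus, zero)          # acts on (|0>+|1>)/sqrt(2) (x) |0>

    M = psi_out.reshape(2, 2)                     # amplitudes indexed by (qubit A, qubit B)
    rho_A = M @ M.conj().T                        # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_A)
    S = -sum(p * np.log2(p) for p in evals if p > 1e-12)   # entanglement entropy in ebits
    print("entanglement generated:", round(float(S), 3), "ebits")   # -> 1.0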

Relevance: 40.00%

Abstract:

Adsorption of nitrogen, argon, methane, and carbon dioxide on activated carbon Norit R1 over a wide range of pressure (up to 50 MPa) at temperatures from 298 to 343 K (supercritical conditions) is analyzed by means of the density functional theory modified by incorporating the Bender equation of state, which describes the bulk phase properties with very high accuracy. This has allowed us to describe precisely the experimental data of carbon dioxide adsorption slightly above and below its critical temperature. The pore size distribution (PSD) obtained with supercritical gases at ambient temperatures compares reasonably well with the PSD obtained with subcritical nitrogen at 77 K. Our approach does not require the skeletal density of activated carbon from helium adsorption measurements to calculate excess adsorption. Instead, this density is treated as a fitting parameter, and in all cases its values are found to fall into a very narrow range close to 2000 kg/m³. It was shown that, in the case of high-pressure adsorption of supercritical gases, the PSD can be reliably obtained for pore widths between 0.6 and 3 nm. Wider pores can be reliably characterized only in terms of surface area, as their corresponding excess local isotherms are the same over the practical range of pressure.
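As a rough illustration of how the skeletal density enters the excess-adsorption calculation when it is treated as a fitting parameter, the Python sketch below fits it jointly with a toy excess isotherm by least squares. All numbers, the Langmuir-type model, and the synthetic data are hypothetical stand-ins, not the paper's DFT/Bender treatment.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical volumetric experiment: the measured excess depends on the
    # assumed skeletal density through the void volume of the cell.
    rho_bulk = np.linspace(50.0, 5000.0, 20)          # mol/m^3, bulk gas density (e.g. from an EOS)
    m_sample, V_cell = 1.0e-3, 5.0e-6                 # kg of carbon, m^3 cell volume
    true = dict(rho_skel=2000.0, n_max=5.0e-3, b=2.0e-3)
    V_void_true = V_cell - m_sample / true["rho_skel"]
    n_ex_true = true["n_max"] * true["b"] * rho_bulk / (1.0 + true["b"] * rho_bulk)
    n_dosed = n_ex_true + rho_bulk * V_void_true      # synthetic "raw" dosed amounts, mol

    def residuals(params):
        rho_skel, n_max, b = params
        V_void = V_cell - m_sample / rho_skel         # void volume for the assumed skeletal density
        n_ex_meas = n_dosed - rho_bulk * V_void       # experimental excess under that assumption
        n_ex_model = n_max * b * rho_bulk / (1.0 + b * rho_bulk)  # toy excess isotherm
        return n_ex_meas - n_ex_model

    fit = least_squares(residuals, x0=[1800.0, 4.0e-3, 1.0e-3])
    print("fitted skeletal density: %.0f kg/m^3" % fit.x[0])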

Relevance: 40.00%

Abstract:

The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework for combining and analysing qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are treated as fuzzy sets, and appropriate membership functions are defined and modelled for each. A new weighted-average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of this new fuzzy set are taken as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits, where it is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model their fuzzy membership. (C) 2002 Elsevier Science Ltd. All rights reserved.
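An informal sketch of the weighted-average aggregation described above is given below. The geo-variables, membership functions, cell values and the simple least-squares weighting are illustrative assumptions, not the paper's data or its regression procedure.

    import numpy as np

    def mu_linear(x, lo, hi):
        """Simple piecewise-linear membership function on [lo, hi]."""
        return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

    # Three hypothetical geo-variables for four cells.
    geochem = np.array([10.0, 80.0, 45.0, 120.0])        # e.g. Sn in ppm
    magnetic = np.array([5.0, 40.0, 90.0, 20.0])         # anomaly amplitude
    dist_fault = np.array([900.0, 150.0, 400.0, 50.0])   # distance to structure, m

    # Fuzzify each geo-variable with a membership function suited to its nature.
    M = np.column_stack([
        mu_linear(geochem, 0.0, 100.0),
        mu_linear(magnetic, 0.0, 100.0),
        1.0 - mu_linear(dist_fault, 0.0, 1000.0),        # closer to structure -> higher membership
    ])

    # Weights would come from control (training) cells; here a hypothetical
    # 0/1 mineralisation label is fitted by ordinary least squares.
    y = np.array([0.0, 1.0, 0.0, 1.0])
    w, *_ = np.linalg.lstsq(M, y, rcond=None)
    w = np.clip(w, 0.0, None); w /= w.sum()              # keep a convex, weighted-average combination

    MFI = M @ w                                          # membership grades of the aggregated fuzzy set
    print(np.round(MFI, 3))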

Relevance: 40.00%

Abstract:

Mineral processing plants use two main processes: comminution and separation. The objective of the comminution process is to break complex particles consisting of numerous minerals into smaller, simpler particles in which individual particles consist primarily of only one mineral. The process in which the mineral composition distribution in particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to optimize a circuit effectively through simulation, it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between feed and product particles can be estimated using a probability method, with the probability defined as the probability that a feed particle of a particular composition and size will form a particular product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to real plant ore are presented. A laboratory ball mill was used to break particles, and the results from this experiment were used to estimate the kernel that represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill, and the results from the first experiment were used to predict the product of the second experiment. The agreement between the predicted and actual results is very good. It is therefore recommended that more extensive validation be carried out to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
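A minimal sketch of the entropy-maximisation idea is shown below for a binary ore: the progeny grade distribution for one feed class is obtained by maximising entropy subject to normalisation and conservation of the mean grade. The grade classes, feed grade and constraint set are simplified illustrations, not the authors' full kernel estimation over size and composition.

    import numpy as np
    from scipy.optimize import minimize

    grades = np.linspace(0.0, 1.0, 12)     # progeny grade classes (fraction of valuable mineral)
    g_feed = 0.3                           # grade of the parent (feed) particle, hypothetical

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)
        return np.sum(p * np.log(p))       # minimise negative entropy = maximise entropy

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},        # probabilities sum to one
        {"type": "eq", "fun": lambda p: p @ grades - g_feed},  # mean grade conserved on breakage
    ]
    p0 = np.full(grades.size, 1.0 / grades.size)
    res = minimize(neg_entropy, p0, constraints=constraints,
                   bounds=[(0.0, 1.0)] * grades.size, method="SLSQP")
    print(np.round(res.x, 3))              # maximum-entropy progeny grade distribution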

Relevance: 40.00%

Abstract:

Recent work on the numerical solution of stochastic differential equations (SDEs) has focussed on the development of numerical methods with good stability and order properties. These methods have been implemented with fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust variable-stepsize implementations. In the deterministic case it has been necessary to consider the best choice of an initial stepsize as well as to develop effective strategies for stepsize control; the same must, of course, be done in the stochastic case. In this paper, proportional-integral (PI) control is applied to a variable-stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable-stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, from a digital filter theory point of view, to PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with a regular stepsize change strategy. (C) 2004 Elsevier B.V. All rights reserved.
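The flavour of a PI stepsize controller for an embedded pair can be sketched as follows. The gains, safety factor, step limits and the example error estimates are generic illustrations, not the values or the exact controller used in the paper.

    # Hedged sketch of proportional-integral (PI) stepsize control: err_new would
    # come from the difference between the two methods of the embedded pair.
    def pi_step_control(h, err_new, err_old, tol, order,
                        k_i=0.3, k_p=0.2, safety=0.9, fac_min=0.2, fac_max=5.0):
        """Return the next stepsize from the current and previous error estimates."""
        e_new = max(err_new, 1e-14)
        e_old = max(err_old, 1e-14)
        factor = safety * (tol / e_new) ** (k_i / (order + 1)) \
                        * (e_old / e_new) ** (k_p / (order + 1))
        factor = min(max(factor, fac_min), fac_max)   # limit how fast the step may change
        return h * factor

    # Example: the latest error estimate exceeds the tolerance, so the step shrinks.
    print(pi_step_control(h=0.01, err_new=2e-6, err_old=1e-6, tol=1e-6, order=1))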

Relevance: 40.00%

Abstract:

The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences in the sensitivity of measures to detect the genetic health of degraded populations were obvious. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation that can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate against loss of genetic diversity and assist in long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Future suggested research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and most importantly, combining neutral marker, pollination dynamics, ecological consequences, and progeny fitness assessment within single studies.

Relevance: 40.00%

Abstract:

The coach is central to the development of expertise in sport (Bloom, 1985) and is subsequently key to facilitating adaptive forms of motivation to enhance the quality of sport performance (Mallett & Hanrahan, 2004). In designing optimal training environments that are sensitive to the underlying motives of athletes, the coach requires an in-depth understanding of motivation. This paper reports on the application of self-determination theory (SDT; Deci & Ryan, 1985; Ryan & Deci, 2000) to coaching elite athletes. Specifically, the application of SDT to designing an autonomy-supportive motivational climate is outlined, which was used in preparing Australia's two men's relay teams for the 2004 Olympic Games in Athens.

Relevance: 40.00%

Abstract:

Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This work aims to develop an incremental expectation maximization (EM) based learning approach for a mixture of experts (ME) system for on-line prediction of LOS. The batch-mode learning process used in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when dealing with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study of all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidity information. A comparative study of the incremental and batch-mode learning algorithms is presented. The performances of the learning algorithms are compared on the basis of the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS once the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared with the batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
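The two performance measures used above are straightforward to reproduce; the short Python sketch below uses invented predictions and observed stays, not the study's data.

    import numpy as np

    # Hedged sketch of the evaluation measures: mean absolute difference (MAD)
    # between predicted and actual LOS, and the proportion of predictions within
    # one day, Prop(MAD < 1). The arrays are invented examples.
    actual = np.array([2.0, 3.0, 1.0, 5.0, 4.0])      # observed LOS in days
    predicted = np.array([2.5, 2.0, 1.2, 6.8, 4.4])   # on-line model predictions

    abs_diff = np.abs(predicted - actual)
    mad = abs_diff.mean()                             # MAD in days
    prop_within_1 = (abs_diff < 1.0).mean()           # Prop(MAD < 1)

    print(f"MAD = {mad:.2f} days, Prop(MAD < 1) = {prop_within_1:.1%}")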