889 results for Axiomatic Models of Resource Allocation


Relevance:

100.00%

Publisher:

Abstract:

Background and problem – As a result of financial crises and the realization of a broader stakeholder network, recent decades have seen an increase in stakeholder demand for non-financial information in corporate reporting. This has led to a situation of information overload, where separate financial and sustainability reports have grown in length and complexity independently of each other. Integrated reporting has been presented as a solution to this problematic situation. The question is whether the corporate world believes this to be the solution and whether the development of corporate reporting is heading in this direction. Purpose – This thesis aims to examine and assess to what extent companies listed on the OMX Stockholm 30 (OMXS30), as per 2016-02-28, comply with the strategic content element of the <IR> Framework, and how this disclosure has developed since the framework’s pilot project and official release, by using a self-constructed disclosure index based on its specific items. Methodology – The purpose was fulfilled through an analysis of 104 annual reports comprising 26 companies during the period 2011-2014. The annual reports were assessed using a self-constructed disclosure index based on the <IR> Framework content element Strategy and Resource Allocation, where one point was given for each disclosed item. Analysis and conclusions – The study found that the OMXS30-listed companies to a large extent comply with the strategic content element of the <IR> Framework and that this compliance has grown steadily throughout the researched time span. There is still room for improvement, however, with a total average framework compliance of 84% for 2014. Although many items are being reported on, there are indications that companies generally miss out on the core values of integrated reporting.
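The scoring scheme described above (one point per disclosed item, averaged across reports) can be sketched in a few lines. The item names and report contents below are hypothetical illustrations, not taken from the thesis's actual disclosure index.

```python
# Hypothetical disclosure-index items; the thesis's actual items differ.
INDEX_ITEMS = ["strategic objectives", "resource allocation plans",
               "competitive advantage", "stakeholder influence"]

def compliance(disclosed_items):
    # One point per disclosed index item, expressed as a fraction of the index.
    return sum(item in disclosed_items for item in INDEX_ITEMS) / len(INDEX_ITEMS)

# Two invented annual reports: the first discloses 3 of 4 items, the second all 4.
reports = [
    {"strategic objectives", "resource allocation plans", "competitive advantage"},
    {"strategic objectives", "resource allocation plans",
     "competitive advantage", "stakeholder influence"},
]
average_compliance = sum(compliance(r) for r in reports) / len(reports)
```

Averaging this fraction over all company-years yields figures like the 84% total average compliance reported for 2014.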

Relevance:

100.00%

Publisher:

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, generating massive amounts of data every day. We live in an age of “Big Data”, producing data on the scale of hundreds of millions of records daily. By analyzing these data and making predictions, better development plans can be drawn up. Traditional computation frameworks cannot meet this demand, which motivated Hadoop. This paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their respective advantages and disadvantages. Because resource management is the core role of YARN, the paper then examines the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is the analysis of YARN's Dominant Resource Fairness (DRF) algorithm and the proposal of a maximum-resource-utilization algorithm based on DRF; the paper also suggests an improvement to a shortcoming in the resource preemption model. Emphasizing “fairness” during resource allocation is the core concept of YARN's DRF algorithm. Because a cluster serves multiple users and multiple resource types, each user's resource request is also multidimensional. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a given user, the dominant resource is the one whose share of cluster capacity is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource shares of all users to be equal.
In cases where different users' dominant resource amounts differ greatly, however, emphasizing “fairness” is not suitable and cannot improve the cluster's resource utilization. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes “fairness” into consideration, but its main principle and goal is maximizing resource utilization. Comparing the results of DRF with those of the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis sets up a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate a cluster.

Relevance:

100.00%

Publisher:

Abstract:

Deployment of low-power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low-power and macro basestations that are located within each other’s transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low-power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated with any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner.
The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method. On the other hand, the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in both cases. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference-of-submodular-functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference-of-convex-functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance with variation in different network topology parameters.
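The alpha-fairness utility maximized above is the standard parametric family of Mo and Walrand: for a user rate x it is x^(1−α)/(1−α) when α ≠ 1 and log x when α = 1, so a single knob trades throughput against fairness. A minimal sketch:

```python
import math

def alpha_utility(rate, alpha):
    # Alpha-fair utility of one user's rate (Mo & Walrand family):
    # log(x) at alpha = 1, x**(1 - alpha) / (1 - alpha) otherwise.
    if alpha == 1:
        return math.log(rate)
    return rate ** (1 - alpha) / (1 - alpha)

def network_utility(rates, alpha):
    # The network-wide objective is the sum of per-user utilities.
    return sum(alpha_utility(r, alpha) for r in rates)
```

Setting alpha = 0 recovers total throughput, alpha = 1 gives proportional fairness, and alpha → ∞ approaches max-min fairness, which is why a single objective covers the whole fairness-efficiency trade-off studied in the dissertation.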

Relevance:

100.00%

Publisher:

Abstract:

To compare women's satisfaction with the childbirth experience under three models of care, a descriptive study with a quantitative approach was carried out in two public hospitals in São Paulo, one following the "Typical" model and the other with an in-hospital birth centre ("CPNIH" model) and a peri-hospital one ("CPNPH" model). The sample consisted of 90 postpartum women, 30 from each model. Comparison of the results concerning the women's satisfaction with the care provided by health professionals, the quality of care, the reasons for satisfaction and dissatisfaction, willingness to recommend the services received, the sense of safety during the process, and suggestions for improvement showed that the CPNPH model was rated best, followed by the CPNIH model and finally the Typical model. It is concluded that the peri-hospital model of childbirth care should receive greater support from SUS, as it is the service with which women report being most satisfied with the care received.

Relevance:

100.00%

Publisher:

Abstract:

The exact composition of a specific class of compact stars, historically referred to as "neutron stars", is still largely unknown. Possibilities ranging from hadronic to quark degrees of freedom, including self-bound versions of the latter, have been proposed. We specifically address the suitability of strange star models (including pairing interactions) in this work, in the light of new measurements available for four compact stars. The analysis shows that these data might be explained by such an exotic equation of state, actually selecting a small window in parameter space, but new precise measurements and further theoretical developments are still needed to settle the subject.

Relevance:

100.00%

Publisher:

Abstract:

We consider a simple Maier-Saupe statistical model with the inclusion of disorder degrees of freedom to mimic the phase diagram of a mixture of rodlike and disklike molecules. A quenched distribution of shapes leads to a phase diagram with two uniaxial and a biaxial nematic structure. A thermalized distribution, however, which is more adequate to liquid mixtures, precludes the stability of this biaxial phase. We then use a two-temperature formalism, and assume a separation of relaxation times, to show that a partial degree of annealing is already sufficient to stabilize a biaxial nematic structure.

Relevance:

100.00%

Publisher:

Abstract:

Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased to the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics – a RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model – was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across 28 sites where painted sensors were installed, and the kappa statistic of agreement between model and painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the kappa statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a smaller mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after the model was calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation in comparison with physical models under a wider range of climate conditions.
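The agreement statistic used above to compare model estimates with sensor readings is Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal sketch for binary (wet = 1, dry = 0) hourly series; the example data in the usage note are invented, not the study's measurements.

```python
def cohens_kappa(a, b):
    # Cohen's kappa for two binary series of equal length: observed
    # agreement p_obs corrected for chance agreement p_exp, which is
    # computed from each series' marginal wet/dry frequencies.
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_a1 = sum(a) / n                     # fraction of "wet" calls in series a
    p_b1 = sum(b) / n                     # fraction of "wet" calls in series b
    p_exp = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_obs - p_exp) / (1 - p_exp)
```

For example, two 8-hour series agreeing in 6 hours with balanced wet/dry marginals give kappa = 0.5, while perfect agreement gives 1.0; values around 0.64-0.71, as reported above, indicate substantial but imperfect agreement.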

Relevance:

100.00%

Publisher:

Abstract:

Objective: To investigate the effects of bilateral, surgically induced functional inhibition of the subthalamic nucleus (STN) on general language, high-level linguistic abilities, and semantic processing skills in a group of patients with Parkinson’s disease. Methods: Comprehensive linguistic profiles were obtained up to one month before and three months after bilateral implantation of electrodes in the STN during active deep brain stimulation (DBS) in five subjects with Parkinson’s disease (mean age, 63.2 years). Equivalent linguistic profiles were generated over a three month period for a non-surgical control cohort of 16 subjects with Parkinson’s disease (NSPD) (mean age, 64.4 years). Education and disease duration were similar in the two groups. Initial assessment and three month follow up performance profiles were compared within subjects by paired t tests. Reliable change indices (RCI), representing clinically significant alterations in performance over time, were calculated for each of the assessment scores achieved by the five STN-DBS cases and the 16 NSPD controls, relative to performance variability within a group of 16 non-neurologically impaired adults (mean age, 61.9 years). Proportions of reliable change were then compared between the STN-DBS and NSPD groups. Results: Paired comparisons within the STN-DBS group showed prolonged postoperative semantic processing reaction times for a range of word types coded for meanings and meaning relatedness. Case-by-case analyses of reliable change across language assessments and groups revealed differences in proportions of change over time within the STN-DBS and NSPD groups in the domains of high-level linguistics and semantic processing. Specifically, when compared with the NSPD group, the STN-DBS group showed a proportionally significant (p

Relevance:

100.00%

Publisher:

Abstract:

Traditionally, the basal ganglia have been implicated in motor behavior, as they are involved in both the execution of automatic actions and the modification of ongoing actions in novel contexts. With respect to cognition, the role of the basal ganglia has not been defined as explicitly. Relative to linguistic processes, contemporary theories of subcortical participation in language have endorsed a role for the globus pallidus internus (GPi) in the control of lexical-semantic operations. However, attempts to empirically validate these postulates have been largely limited to neuropsychological investigations of verbal fluency abilities subsequent to pallidotomy. We evaluated the impact of bilateral posteroventral pallidotomy (BPVP) on language function across a range of general and high-level linguistic abilities, and validated/extended working theories of pallidal participation in language. Comprehensive linguistic profiles were compiled up to 1 month before and 3 months after BPVP in 6 subjects with Parkinson's disease (PD). Commensurate linguistic profiles were also gathered over a 3-month period for a nonsurgical control cohort of 16 subjects with PD and a group of 16 non-neurologically impaired controls (NC). Nonparametric between-groups comparisons were conducted and reliable change indices calculated, relative to baseline/3-month follow-up difference scores. Group-wise statistical comparisons between the three groups failed to reveal significant postoperative changes in language performance. Case-by-case data analysis relative to clinically consequential change indices revealed reliable alterations in performance across several language variables as a consequence of BPVP. These findings lend support to models of subcortical participation in language, which promote a role for the GPi in lexical-semantic manipulation mechanisms.
Concomitant improvements and decrements in postoperative performance were interpreted within the context of additive and subtractive postlesional effects. Relative to parkinsonian cohorts, clinically reliable versus statistically significant changes on a case-by-case basis may provide the most accurate method of characterizing the way in which pathophysiologically divergent basal ganglia linguistic circuits respond to BPVP.

Relevance:

100.00%

Publisher:

Abstract:

This paper critically assesses several loss allocation methods based on the type of competition each method promotes. This understanding assists in determining which method will promote more efficient network operations when implemented in deregulated electricity industries. The methods addressed in this paper include the pro rata [1], proportional sharing [2], loss formula [3], incremental [4], and a new method proposed by the authors of this paper, which is loop-based [5]. These methods are tested on a modified Nordic 32-bus network, where different case studies of different operating points are investigated. The varying results obtained for each allocation method at different operating points make it possible to distinguish methods that promote unhealthy competition from those that encourage better system operation.
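Of the methods compared, pro rata [1] is the simplest to state: each participant is charged in proportion to its share of total demand (or generation), ignoring network location entirely, which is precisely why it can blunt the locational incentives the paper examines. A minimal sketch, with invented bus names and numbers:

```python
def pro_rata(total_loss, demands):
    # Pro rata loss allocation: charge each load in proportion to its
    # share of total demand, regardless of where it sits in the network.
    total = sum(demands.values())
    return {bus: total_loss * d / total for bus, d in demands.items()}

# Invented example: 10 MW of system losses split over three loads.
shares = pro_rata(10.0, {"bus1": 50.0, "bus2": 30.0, "bus3": 20.0})
```

Because the allocation depends only on the demand vector, two loads of equal size pay identically even if one is remote and causes far more losses; location-aware methods such as the incremental [4] and loop-based [5] schemes are designed to remove exactly this distortion.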

Relevance:

100.00%

Publisher:

Abstract:

In this review we demonstrate how the algebraic Bethe ansatz is used for the calculation of the energy spectra and form factors (operator matrix elements in the basis of Hamiltonian eigenstates) in exactly solvable quantum systems. As examples we apply the theory to several models of current interest in the study of Bose-Einstein condensates, which have been successfully created using ultracold dilute atomic gases. The first model we introduce describes Josephson tunnelling between two coupled Bose-Einstein condensates. It can be used not only for the study of tunnelling between condensates of atomic gases, but also for solid-state Josephson junctions and coupled Cooper pair boxes. The theory is also applicable to models of atomic-molecular Bose-Einstein condensates, with two examples given and analysed. Additionally, these same two models are relevant to studies in quantum optics. Finally, we discuss the model of Bardeen, Cooper and Schrieffer in this framework, which is appropriate for systems of ultracold fermionic atomic gases, as well as being applicable to the description of superconducting correlations in metallic grains with nanoscale dimensions. In applying all the above models to physical situations, the need for an exact analysis of small-scale systems is established, due to large quantum fluctuations which render mean-field approaches inaccurate.

Relevance:

100.00%

Publisher:

Abstract:

Predicted area under the curve (AUC), mean transit time (MTT) and normalized variance (CV2) data have been compared for parent compound and generated metabolite following an impulse input into the liver. Models studied were the well-stirred (tank) model, the tube model, a distributed tube model, the dispersion model (Danckwerts and mixed boundary conditions) and the tanks-in-series model. It is well known that discrimination between models for a parent solute is greatest when the parent solute is highly extracted by the liver. With the metabolite, the greatest model differences for MTT and CV2 occur when the parent solute is poorly extracted. In all cases the predictions of the distributed tube, dispersion, and tanks-in-series models lie between the predictions of the tank and tube models. The dispersion model with mixed boundary conditions yields predictions identical to those of the distributed tube model (assuming an inverse Gaussian distribution of tube transit times). The dispersion model with Danckwerts boundary conditions and the tanks-in-series model give predictions similar to those of the dispersion (mixed boundary conditions) and distributed tube models. The normalized variance for the parent compound depends on hepatocyte permeability only within a distinct range of permeability values. This range is similar for each model, but the order of magnitude predicted for the normalized variance is model dependent. Only for a one-compartment system is the MTT for generated metabolite equal to the sum of the MTTs for the parent compound and for preformed metabolite administered as parent.

Relevance:

100.00%

Publisher:

Abstract:

Two studies examined relations between groups (humanities and math-science students) that implicitly or explicitly share a common superordinate category (university student). In Experiment 1, 178 participants performed a noninteractive decision-making task during which category salience was manipulated in a 2 (superordinate category salience) x 2 (subordinate category salience) between-groups design. Consistent with the mutual intergroup differentiation model, participants for whom both categories were salient exhibited the lowest levels of bias, whereas bias was strongest when the superordinate category alone was made salient. This pattern of results was replicated in Experiment 2 (N = 135). In addition, Experiment 2 demonstrated that members of subgroups that are nested within a superordinate category are more sensitive to how the superordinate category is represented than are members of subgroups that extend beyond the boundaries of the superordinate category.

Relevance:

100.00%

Publisher:

Abstract:

We investigate the internal dynamics of two cellular automaton models with heterogeneous strength fields and differing nearest-neighbour laws. One model is a crack-like automaton, transferring all stress from a rupture zone to the surroundings. The other is a partial stress drop automaton, transferring only a fraction of the stress within a rupture zone to the surroundings. To study the evolution of stress, the mean spectral density f(k_r) of a stress deficit field is examined prior to, and immediately following, ruptures in both models. Both models display a power-law relationship between f(k_r) and spatial wavenumber k_r of the form f(k_r) ~ k_r^(-β). In the crack model, the evolution of the stress deficit is consistent with a cyclic approach to, and retreat from, a critical state in which large events occur. The approach to criticality is driven by tectonic loading. Short-range stress transfer in the model does not affect the approach to criticality of broad regions in the model. The evolution of the stress deficit in the partial stress drop model is consistent with small fluctuations about a mean state of high stress, behaviour indicative of a self-organised critical system. Despite statistics similar to natural earthquakes, these simplified models lack a physical basis; physically motivated models of earthquakes also display dynamical complexity similar to that of a critical-point system. Studies of dynamical complexity in physical models of earthquakes may lead to advancement towards a physical theory for earthquakes.
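A toy version of the two transfer rules can be sketched as a 1-D automaton with heterogeneous strengths: a failed cell sheds a fraction of its stress to its nearest neighbours, with FRACTION = 1.0 giving crack-like (full) transfer and FRACTION < 1 a partial stress drop. This is an illustrative rule under invented parameters, not the paper's exact model.

```python
import random

random.seed(1)

N = 64                  # number of cells in the 1-D automaton
FRACTION = 1.0          # 1.0 -> crack-like rule; < 1.0 -> partial stress drop
strength = [1.0 + random.random() for _ in range(N)]   # heterogeneous strengths

def load_and_relax(stress, rate=0.01):
    # One loading cycle: add uniform "tectonic" stress everywhere, then let
    # unstable cells (stress >= strength) shed FRACTION of their stress,
    # half to each nearest neighbour; stress crossing the open boundaries
    # is lost, which makes the dynamics dissipative and avalanches finite.
    stress = [s + rate for s in stress]
    unstable = [i for i in range(N) if stress[i] >= strength[i]]
    while unstable:
        nxt = []
        for i in unstable:
            drop = FRACTION * stress[i]
            stress[i] -= drop
            for j in (i - 1, i + 1):
                if 0 <= j < N:
                    stress[j] += drop / 2
                    if stress[j] >= strength[j] and j not in nxt:
                        nxt.append(j)
        unstable = nxt
    return stress

stress = [0.0] * N
for _ in range(300):    # drive the system through many loading cycles
    stress = load_and_relax(stress)
```

Tracking the spatial spectrum of the stress deficit (strength minus stress) over such cycles is what reveals the power-law form and the approach to or retreat from criticality described above.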

Relevance:

100.00%

Publisher:

Abstract:

Peptides that induce and recall T-cell responses are called T-cell epitopes. T-cell epitopes may be useful in a subunit vaccine against malaria. Computer models that simulate peptide binding to MHC are useful for selecting candidate T-cell epitopes, since they minimize the number of experiments required for their identification. We applied a combination of computational and immunological strategies to select candidate T-cell epitopes. A total of 86 experimental binding assays were performed in three rounds of identification of HLA-A11 binding peptides from six pre-erythrocytic malaria antigens. Thirty-six peptides were experimentally confirmed as binders. We show that the cyclical refinement of the artificial neural network (ANN) models results in a significant improvement in the efficiency of identifying potential T-cell epitopes.