984 results for dynamic decay adjustment
Abstract:
The balance between player competence and the challenge presented by a task has been acknowledged as a major factor in providing optimal experience in video games. While Dynamic Difficulty Adjustment (DDA) offers methods for adjusting difficulty in real time in single-player games, little research has explored its application in competitive multiplayer games, where challenge is dictated by the competence of human opponents. A formal review of 180 existing competitive multiplayer games found that a large number of modern titles use DDA techniques to balance challenge between human opponents. From this data, we propose a preliminary framework for classifying Multiplayer Dynamic Difficulty Adjustment (mDDA) instances.
Abstract:
Multiplayer Dynamic Difficulty Adjustment (mDDA) is a method of reducing the difference in player performance, and hence in experienced challenge, in competitive multiplayer video games. As a balance between player skill and experienced challenge is necessary for optimal player experience, this experimental study investigates the effects of mDDA, and of awareness of its presence, on player performance and experience using subjective and biometric measures. Early analysis indicates that mDDA normalizes performance and challenge as expected, but that awareness of its presence can reduce its effectiveness.
Abstract:
Monoallelic expression in diploid mammalian cells appears to be a widespread phenomenon, with the most studied examples being X-chromosome inactivation in eutherian female cells and genomic imprinting in the mouse and human. Silencing and methylation of certain sites on one of the two alleles in somatic cells is specific with respect to parental source for imprinted genes and random for X-linked genes. We report here evidence indicating that: (i) differential methylation patterns of imprinted genes are not simply copied from the gametes, but rather established gradually after fertilization; (ii) very similar methylation patterns are observed for diploid, tetraploid, parthenogenic, and androgenic preimplantation mouse embryos, as well as parthenogenic and androgenic mouse embryonic stem cells; (iii) haploid parthenogenic embryos do not show methylation adjustment as seen in diploid or tetraploid embryos, but rather retain the maternal pattern. These observations suggest that differential methylation in imprinted genes is achieved by a dynamic process that senses gene dosage and adjusts methylation similar to X-chromosome inactivation.
Abstract:
This paper presents an investigation into dynamic self-adjustment of task deployment and other aspects of self-management through the embedding of multiple policies. Non-dedicated, loosely coupled computing environments such as clusters and grids are increasingly popular platforms for parallel processing. These abundant systems are highly dynamic environments in which many sources of variability affect the run-time efficiency of tasks, and the dynamism is exacerbated by the incorporation of mobile devices and wireless communication. This paper proposes an adaptive strategy for the flexible run-time deployment of tasks that continuously maintains efficiency despite the environmental variability. The strategy centres on policy-based scheduling informed by contextual and environmental inputs, such as the variance in round-trip communication time between a client and its workers and the effective processing performance of each worker. A self-management framework has been implemented for evaluation purposes. The framework integrates several policy-controlled, adaptive services with the application code, enabling the run-time behaviour to be adapted to contextual and environmental conditions. Using this framework, an exemplar self-managing parallel application is implemented and used to investigate the extent of the benefits of the strategy.
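As an illustration of the policy-based scheduling described in this abstract, the following sketch scores workers by their observed throughput penalised by the variance of their round-trip times. The names (`Worker`, `rtt_samples`, `choose_worker`) and the penalty weight are hypothetical, not taken from the framework itself.

```python
import statistics
from dataclasses import dataclass, field

@dataclass
class Worker:
    """Hypothetical record of a worker's observed run-time behaviour."""
    name: str
    throughput: float                                   # tasks/s completed
    rtt_samples: list = field(default_factory=list)     # round-trip times (s)

    def rtt_variance(self) -> float:
        return statistics.pvariance(self.rtt_samples) if len(self.rtt_samples) > 1 else 0.0

def choose_worker(workers, variance_penalty=20.0):
    """Policy: prefer high throughput, penalise unstable communication."""
    def score(w: Worker) -> float:
        return w.throughput - variance_penalty * w.rtt_variance()
    return max(workers, key=score)

if __name__ == "__main__":
    pool = [
        Worker("wired-node", throughput=8.0, rtt_samples=[0.05, 0.06, 0.05]),
        Worker("mobile-node", throughput=9.0, rtt_samples=[0.05, 0.40, 0.90]),
    ]
    print("deploy next task on:", choose_worker(pool).name)
```

With these (invented) numbers the faster but erratic mobile node loses to the stable wired node, which is the kind of context-aware deployment decision the abstract describes.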
Abstract:
Static timing analysis provides the basis for setting the clock period of a microprocessor core, based on its worst-case critical path. However, depending on the design, this critical path is not always excited and therefore dynamic timing margins exist that can theoretically be exploited for the benefit of better speed or lower power consumption (through voltage scaling). This paper introduces predictive instruction-based dynamic clock adjustment as a technique to trim dynamic timing margins in pipelined microprocessors. To this end, we exploit the different timing requirements for individual instructions during the dynamically varying program execution flow without the need for complex circuit-level measures to detect and correct timing violations. We provide a design flow to extract the dynamic timing information for the design using post-layout dynamic timing analysis and we integrate the results into a custom cycle-accurate simulator. This simulator allows annotation of individual instructions with their impact on timing (in each pipeline stage) and rapidly derives the overall code execution time for complex benchmarks. The design methodology is illustrated at the microarchitecture level, demonstrating the performance and power gains possible on a 6-stage OpenRISC in-order general purpose processor core in a 28nm CMOS technology. We show that employing instruction-dependent dynamic clock adjustment leads on average to an increase in operating speed by 38% or to a reduction in power consumption by 24%, compared to traditional synchronous clocking, which at all times has to respect the worst-case timing identified through static timing analysis.
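A toy version of the simulator's timing annotation conveys the idea. The per-instruction stage delays below are invented numbers, not measurements from the paper; the sketch only shows how clocking each cycle at the delay of the instruction in flight shortens execution time relative to a single worst-case period.

```python
# Hypothetical per-instruction critical-path delays (ns) for a 6-stage pipeline.
STAGE_DELAY_NS = {
    "add":  [0.9, 1.1, 1.4, 0.8, 0.7, 0.6],
    "mul":  [0.9, 1.1, 2.3, 0.8, 0.7, 0.6],
    "load": [0.9, 1.1, 1.2, 2.0, 1.8, 0.6],
}

WORST_CASE_NS = max(max(d) for d in STAGE_DELAY_NS.values())  # static timing bound

def execution_time(trace):
    """Clock each cycle at the slowest stage delay of the instruction executed.

    This deliberately ignores pipeline overlap (in reality every in-flight
    instruction constrains the period simultaneously); it only illustrates the
    per-instruction clock-adjustment idea.
    """
    return sum(max(STAGE_DELAY_NS[op]) for op in trace)

trace = ["add"] * 70 + ["load"] * 20 + ["mul"] * 10
dynamic = execution_time(trace)
static = WORST_CASE_NS * len(trace)
print(f"dynamic clocking: {dynamic:.1f} ns, static worst case: {static:.1f} ns, "
      f"speedup: {static / dynamic:.2f}x")
```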
Abstract:
Location decisions are often subject to dynamic aspects such as changes in customer demand. The answer is to allow greater flexibility in the location and capacity of facilities. Even when demand can be forecast, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from several operations research perspectives, presenting and comparing several integer linear programming (ILP) models, assessing their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated this work: a forestry company that needs to locate camps to house forest workers. We present an ILP model that allows the construction of new camps and the expansion, relocation and temporary partial closure of existing camps. The model uses particular capacity constraints, as well as a multi-level economy-of-scale cost structure. The usefulness of the model is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic location problems and provides better linear-relaxation bounds than their specialized formulations. It can solve location problems in which capacity-change costs are defined for every pair of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of the two. We prove dominance relations between our formulation and the existing models for these special cases. Computational experiments on a large number of randomly generated instances with up to 100 facilities and 1000 customers show that our model obtains optimal solutions faster than the existing specialized formulations. Given the complexity of the preceding models on large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1000 customers. We then improve the quality of the obtained solution by solving a restricted ILP model that exploits the information gathered while solving the Lagrangian dual. Computational results show that the heuristics quickly provide good-quality solutions, even for instances on which generic solvers find no feasible solution. Finally, we adapt the preceding heuristics to solve the industrial problem. Two different relaxations are proposed and compared. Extensions of the previous concepts are presented to ensure reliable solution within a reasonable time.
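The central modelling idea, capacity-change costs defined for every pair of capacity levels, can be sketched as a generic multi-period formulation. The notation below is illustrative only and is not the thesis's exact model: y selects one capacity level per facility and period, z records a switch between levels, and x assigns customer demand.

```latex
\begin{align*}
\min\; & \sum_{t}\sum_{i}\sum_{k,\ell} g_{ik\ell}\, z_{ik\ell t}
         \;+\; \sum_{t}\sum_{i}\sum_{j} c_{ijt}\, x_{ijt} \\
\text{s.t.}\; & \sum_{i} x_{ijt} = d_{jt} \quad \forall j,t
   \quad \text{(serve all demand)} \\
& \sum_{j} x_{ijt} \le \sum_{\ell} u_{\ell}\, y_{i\ell t} \quad \forall i,t
   \quad \text{(capacity of the level in use)} \\
& \sum_{\ell} y_{i\ell t} = 1 \quad \forall i,t
   \quad \text{(one level per facility; level 0 = closed)} \\
& z_{ik\ell t} \ge y_{ik,t-1} + y_{i\ell t} - 1 \quad \forall i,k,\ell,t
   \quad \text{(pay } g_{ik\ell} \text{ when switching from level } k \text{ to } \ell\text{)} \\
& y_{i\ell t},\, z_{ik\ell t} \in \{0,1\}, \qquad x_{ijt} \ge 0 .
\end{align*}
```

Expansion, reduction and temporary closure then appear as special cases of the pairwise switching costs g, which is the generalization the second part of the thesis studies.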
Abstract:
CONCLUSIONS The focus of this work was the investigation of anomalies in Tg and dynamics at polymer surfaces. The thermally induced decay of hot-embossed polymer gratings is studied using laser diffraction and atomic force microscopy (AFM). Monodisperse PMMA and PS are selected in the Mw ranges of 4.2 to 65.0 kg/mol and 3.47 to 65.0 kg/mol, respectively. Two different modes of measurement were used: one mode uses temperature ramps to obtain an estimate of the near-surface glass temperature, Tdec,0; the other mode investigates the dynamics at a constant temperature above Tg. The temperature-ramp experiments reveal Tdec,0 values very close to the Tg,bulk values, as determined by differential scanning calorimetry (DSC). The PMMA of 65.0 kg/mol shows a decreased value of Tg, while the PS samples of 3.47 and 10.3 kg/mol (Mw
Abstract:
Fingerprints are used for identification in forensics, and fingerprint identification is classified into manual and automatic approaches; automatic fingerprint identification systems are further classified into latent and exemplar. A novel exemplar technique, Fingerprint Image Verification using Dictionary Learning (FIVDL), is proposed to improve performance on low-quality fingerprints, where the dictionary learning method reduces time complexity by using block processing instead of pixel processing. The dynamic range of an image is adjusted using the Successive Mean Quantization Transform (SMQT) technique, and frequency-domain noise is reduced using spectral-frequency histogram equalization. Then, an adaptive nonlinear dynamic range adjustment technique is used to determine the local spectral features of the corresponding fingerprint ridge frequency and orientation. The dictionary is constructed using the spatial fundamental frequency determined from these spectral features. The dictionaries help remove spurious noise present in fingerprints and are further used to reconstruct the image for matching. The proposed FIVDL is verified on FVC database sets, and experimental results show an improvement over state-of-the-art techniques.
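To make the dynamic-range step more concrete, here is a minimal sketch of a generic SMQT on a 1-D array. It is not the FIVDL pipeline itself, and the helper name `smqt` is ours; the transform recursively splits the data around its mean and builds an L-bit, illumination-insensitive code per sample.

```python
import numpy as np

def smqt(values: np.ndarray, levels: int = 8) -> np.ndarray:
    """Successive Mean Quantization Transform (generic sketch, 1-D input)."""
    out = np.zeros(values.shape, dtype=np.int64)

    def split(idx: np.ndarray, level: int) -> None:
        if level == 0 or idx.size == 0:
            return
        mean = values[idx].mean()
        above = idx[values[idx] > mean]
        below = idx[values[idx] <= mean]
        out[above] |= 1 << (level - 1)    # set this quantization level's bit
        split(above, level - 1)
        split(below, level - 1)

    split(np.arange(values.size), levels)
    return out

# Example: a low-contrast patch is spread over the full 8-bit code range.
patch = np.array([118, 120, 121, 119, 122, 120, 123, 117], dtype=float)
print(smqt(patch, levels=8))
```

For a 2-D fingerprint image the same transform would be applied to the flattened (or block-wise) pixel values, which is consistent with the block-processing emphasis in the abstract.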
Abstract:
One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since that initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches the firm's capital structure from an information-asymmetry perspective and assumes a hierarchy of financing, with firms using internal funds first, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on panel data consisting of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of capital structure characteristics are identified: firm size, profitability, growth opportunities, risk, asset tangibility, taxes, speed of adjustment and financial deficit. Regression analysis is used to examine the effects of these firm characteristics on capital structure, with the regression models formed on the basis of the relevant theories. The general capital structure model is estimated with a fixed effects estimator. Dynamic models also play an important role in several areas of corporate finance, but the combination of fixed effects and lagged dependent variables makes estimation more complicated. A dynamic partial adjustment model is therefore estimated using the Arellano and Bond (1991) first-differencing generalized method of moments, as well as ordinary least squares and fixed effects estimators. The results for Finnish listed firms support the predicted effects of profitability, firm size and non-debt tax shields. No conclusive support for the pecking order theory is found; however, its effect cannot be fully ignored, and it is concluded that, rather than being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% a year, measured using the book debt ratio.
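The partial adjustment model referred to above is standard and can be written compactly: the speed of adjustment λ (estimated here at about 0.29 per year) governs how much of the gap to the target debt ratio D*_{it}, itself a function of the firm characteristics x_{it}, is closed each year.

```latex
D_{it} - D_{i,t-1} = \lambda\,\bigl(D^{*}_{it} - D_{i,t-1}\bigr) + \varepsilon_{it},
\qquad D^{*}_{it} = \beta' x_{it}
\;\;\Longrightarrow\;\;
D_{it} = (1-\lambda)\,D_{i,t-1} + \lambda\,\beta' x_{it} + \varepsilon_{it}.
```

The lagged dependent variable in the rearranged equation is what makes the fixed-effects estimator biased in short panels and motivates the Arellano and Bond first-difference GMM estimator mentioned in the abstract.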
Abstract:
A simple yet efficient harmony search (HS) method with a new pitch adjustment rule (NPAHS) is proposed for dynamic economic dispatch (DED) of electrical power systems, a large-scale non-linear real-time optimization problem subject to a number of complex constraints. The new pitch adjustment rule is based on perturbation information and the mean value of the harmony memory; it is simple to implement and helps to enhance solution quality and convergence speed. A new constraint handling technique is also developed to handle the various constraints in the DED problem effectively, and the violation of ramp rate limits between the first and last scheduling intervals, which is often ignored by existing approaches for DED problems, is effectively eliminated. To validate its effectiveness, NPAHS is first tested on 10 popular benchmark functions with 100 dimensions, in comparison with four HS variants and five state-of-the-art evolutionary algorithms. NPAHS is then used to solve three 24-h DED systems with 5, 15 and 54 units, which consider valve point effects, transmission loss, emission and prohibited operating zones. Simulation results on all these systems show the scalability and superiority of the proposed NPAHS on various large-scale problems.
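A compact harmony search loop shows where the pitch adjustment rule sits in the algorithm. The memory-mean-plus-perturbation rule below is only one plausible reading of the rule described in the abstract (the exact NPAHS formula is in the paper), and the sphere function stands in for the DED cost function.

```python
import random

def sphere(x):                         # stand-in for the DED cost function
    return sum(v * v for v in x)

def harmony_search(dim=10, hms=20, hmcr=0.9, par=0.3, iters=5000, lo=-10.0, hi=10.0):
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [sphere(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # draw from harmony memory
                v = hm[random.randrange(hms)][d]
                if random.random() < par:              # pitch adjustment step
                    mean_d = sum(h[d] for h in hm) / hms
                    # illustrative rule: pull toward the memory mean with a
                    # random perturbation (not the paper's exact formula)
                    v = mean_d + random.uniform(-1.0, 1.0) * (v - mean_d)
            else:                                      # random consideration
                v = random.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(range(hms), key=lambda i: fit[i])
        f_new = sphere(new)
        if f_new < fit[worst]:                         # replace the worst harmony
            hm[worst], fit[worst] = new, f_new
    return min(fit)

print("best sphere value found:", harmony_search())
```

In the DED setting the decision vector would hold the unit outputs over the 24 scheduling intervals, and infeasible harmonies would be repaired or penalised by the constraint handling technique the abstract describes.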
Abstract:
Fine carbonaceous aerosols (CAs) are the key factor influencing the currently heavily polluted air in Chinese megacities, yet few studies have simultaneously traced the origins of the different CA species using specific and powerful source tracers. Here, we present a detailed source apportionment for various CA fractions, including organic carbon (OC), water-soluble OC (WSOC), water-insoluble OC (WIOC), elemental carbon (EC) and secondary OC (SOC), in the largest cities of North China (Beijing, BJ) and South China (Guangzhou, GZ), using measurements of radiocarbon and anhydrosugars. The results show that non-fossil sources such as biomass burning and biogenic emissions make a significant contribution to total CAs in Chinese megacities: 56 ± 4% in BJ and 46 ± 5% in GZ, respectively. The relative contributions of primary fossil carbon from coal and liquid petroleum combustion, primary non-fossil carbon and secondary organic carbon (SOC) to total carbon are 19%, 28% and 54% in BJ, and 40%, 15% and 46% in GZ, respectively. Non-fossil sources account for 52% of SOC in BJ and 71% in GZ. These results suggest that biomass burning has a greater influence on regional particulate air pollution in North China than in South China. We also observed a complete haze bloom-decay process in South China, which shows that both primary and secondary matter from fossil sources played a key role in the blooming phase of the pollution episode, while the haze phase was predominantly driven by fossil-derived secondary organic matter and nitrate.
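The radiocarbon-based split between fossil and non-fossil carbon rests on a simple isotope mass balance, sketched below in its standard form; fossil carbon is 14C-free, and the reference value accounts for the bomb-14C excess in contemporary carbon (the specific reference value used in the study may differ).

```latex
f_{\mathrm{NF}} \;=\; \frac{F^{14}\mathrm{C}_{\mathrm{sample}}}{F^{14}\mathrm{C}_{\mathrm{ref}}},
\qquad
\mathrm{TC}_{\mathrm{non\text{-}fossil}} = f_{\mathrm{NF}}\,\mathrm{TC},
\qquad
\mathrm{TC}_{\mathrm{fossil}} = (1 - f_{\mathrm{NF}})\,\mathrm{TC},
```

where F14C is the measured fraction of modern carbon and TC the total carbon in the fraction analysed; the anhydrosugar tracers then subdivide the non-fossil carbon between biomass burning and biogenic sources.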
Abstract:
In this paper we propose a range of dynamic data envelopment analysis (DEA) models that allow information on adjustment costs to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs, non-static output quantities, non-static input prices, non-static adjustment costs, technological change, quasi-fixed inputs and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to standard static DEA models: they identify an optimal path of adjustment for the input quantities and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data on a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
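A generic cost-minimisation sketch conveys how adjustment costs enter such a model; this is an illustrative formulation under simplifying assumptions, not the paper's own specification.

```latex
\min_{\{x_t,\lambda_t\}} \; \sum_{t=1}^{T} \Bigl( w_t' x_t + a'\,\lvert x_t - x_{t-1} \rvert \Bigr)
\quad \text{s.t.} \quad
X \lambda_t \le x_t, \;\; Y \lambda_t \ge y_t, \;\; \lambda_t \ge 0, \qquad t = 1,\dots,T,
```

where X and Y contain the observed inputs and outputs defining the technology, w_t are input prices, a holds adjustment-cost rates, x_0 is the firm's current input vector, and the absolute value is linearised in practice by splitting input changes into increases and decreases (which is also where asymmetric adjustment costs can be introduced).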
Abstract:
The aim of this paper is to explore a new approach to obtaining better traffic demand estimates (Origin-Destination, OD, matrices) for dense urban networks. A review of existing methods, from static to dynamic OD matrix estimation, identified possible deficiencies: insufficient detail in traffic assignment for complex urban networks and shortcomings in the dynamic approaches. To improve the overall traffic demand estimation process, this paper focuses on a new methodology for determining dynamic OD matrices for urban areas characterized by complex route-choice situations and a high level of traffic control. An iterative bi-level approach is used: the lower-level (traffic assignment) problem dynamically determines how vehicles use the network, based on heuristic data from a mesoscopic traffic simulator, while the upper-level (matrix adjustment) problem estimates the OD matrix using a Kalman filtering optimization technique. In this way, a fully dynamic and continuous estimation of the final OD matrix can be obtained. First results of the proposed approach and remarks are presented.
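The upper-level adjustment step can be illustrated with a linear Kalman correction of the OD vector from observed link counts. The assignment matrix A (the fraction of each OD flow crossing each detector link) would come from the lower-level mesoscopic simulation; all names and numbers below are illustrative.

```python
import numpy as np

def kalman_od_update(x_prior, P_prior, A, counts, R):
    """One Kalman correction of an OD-flow vector from link counts.

    x_prior : prior OD estimate (n,)
    P_prior : prior covariance (n, n)
    A       : assignment matrix mapping OD flows to link counts (m, n)
    counts  : observed link counts (m,)
    R       : measurement-noise covariance (m, m)
    """
    S = A @ P_prior @ A.T + R                      # innovation covariance
    K = P_prior @ A.T @ np.linalg.inv(S)           # Kalman gain
    x_post = x_prior + K @ (counts - A @ x_prior)  # corrected OD flows
    P_post = (np.eye(len(x_prior)) - K @ A) @ P_prior
    return np.clip(x_post, 0.0, None), P_post      # demand cannot be negative

# Toy example: two OD pairs observed through three detector links.
x0 = np.array([100.0, 80.0])
P0 = np.diag([400.0, 400.0])
A = np.array([[1.0, 0.0], [0.4, 0.6], [0.0, 1.0]])
y = np.array([118.0, 95.0, 74.0])
R = np.diag([25.0, 25.0, 25.0])
print(kalman_od_update(x0, P0, A, y, R)[0])
```

In the iterative bi-level scheme, the corrected OD matrix is fed back into the assignment step, a new A is produced, and the update is repeated for each time slice.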
Abstract:
For a multi-armed bandit problem with exponential discounting, the optimal allocation rule is given by a dynamic allocation index defined for each arm on its state space. The index for an arm is equal to the expected immediate reward from the arm, with an upward adjustment reflecting any uncertainty about the prospects of obtaining rewards from the arm and the possibility of resolving those uncertainties by selecting that arm. The learning component of the index is therefore defined as the difference between the index and the expected immediate reward. For two arms with the same expected immediate reward, the learning component should be larger for the arm whose reward rate is more uncertain. This is shown to be true for arms based on independent samples from a fixed distribution with an unknown parameter in the Bernoulli and normal cases, and similar results are obtained in other cases.
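In modern notation, the dynamic allocation (Gittins) index of an arm in state x with discount factor β, and the learning component discussed above, can be written as follows; this is the standard definition rather than anything new.

```latex
\nu(x) \;=\; \sup_{\tau > 0}
\frac{\mathbb{E}\!\left[\sum_{t=0}^{\tau-1} \beta^{t} R_{t} \,\middle|\, X_{0}=x\right]}
     {\mathbb{E}\!\left[\sum_{t=0}^{\tau-1} \beta^{t} \,\middle|\, X_{0}=x\right]},
\qquad
L(x) \;=\; \nu(x) \;-\; \mathbb{E}\!\left[R_{0} \mid X_{0}=x\right],
```

where the supremum is over stopping times τ and L(x) is the learning component: it is zero when the arm's reward rate is known and grows with the uncertainty that pulling the arm could resolve.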
Abstract:
We predict the dynamic light scattering intensity S(q,t) for the L3 phase (anomalous isotropic phase) of dilute surfactant solutions. Our results are based on a Landau-Ginzburg approach, which was previously used to explain the observed static structure factor S(q, 0). In the extreme limit of small q, we find a monoexponential decay with marginal or irrelevant hydrodynamic interactions. In most other regimes the decay of S(q,t) is strongly nonexponential; in one case, it is purely algebraic at long times.
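For reference, the contrast drawn in this abstract between the small-q and long-time regimes can be written schematically; the forms below only define what monoexponential and algebraic decay mean here and are not results specific to the L3-phase calculation.

```latex
S(q,t) \;\simeq\; S(q,0)\,e^{-\Gamma(q)\,t} \quad (q \to 0),
\qquad\text{versus}\qquad
S(q,t) \;\sim\; t^{-\alpha} \quad \text{at long times in the algebraic regime},
```

with Γ(q) the q-dependent relaxation rate and α an exponent left unspecified here.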