933 results for Return-based pricing kernel


Relevance: 40.00%

Publisher:

Abstract:

Purpose – Describes a new breed of HR strategies that encourage employee involvement and commitment as part of high-performance working (HPW). Design/methodology/approach – Focuses on managing employee attitudes and skills through careful attention to leadership, reward and job-design policies. Highlights the differences between people's formal employment contracts and their less formal “psychological contracts”, and emphasizes the importance of the latter. Provides a case study of UK recruitment consultancy Angel Services Group Ltd, which allows staff who meet their daily targets to go home an hour early. Findings – Urges companies to have processes in place to understand the needs of individual employees. This can be done through leadership policies that require all supervisors and managers not only to manage their staff but also to know them as people. Practical implications – Emphasizes that organizations need to see HPW initiatives as part of the normal way of managing people, and not as “flavour of the month”. Originality/value – Outlines a wide range of initiatives that could help organizations to gain their employees' commitment.

Relevance: 40.00%

Publisher:

Abstract:

Background - Modelling the interaction between potentially antigenic peptides and Major Histocompatibility Complex (MHC) molecules is a key step in identifying potential T-cell epitopes. For Class II MHC alleles, the binding groove is open at both ends, causing ambiguity in the positional alignment between the groove and peptide, as well as creating uncertainty as to what parts of the peptide interact with the MHC. Moreover, the antigenic peptides have variable lengths, making naive modelling methods difficult to apply. This paper introduces a kernel method that can handle variable length peptides effectively by quantifying similarities between peptide sequences and integrating these into the kernel. Results - The kernel approach presented here shows increased prediction accuracy with a significantly higher number of true positives and negatives on multiple MHC class II alleles, when testing data sets from MHCPEP [1], MHCBN [2], and MHCBench [3]. Evaluation by cross validation, when segregating binders and non-binders, produced an average of 0.824 AROC for the MHCBench data sets (up from 0.756), and an average of 0.96 AROC for multiple alleles of the MHCPEP database. Conclusion - The method improves performance over existing state-of-the-art methods of MHC class II peptide binding predictions by using a custom, knowledge-based representation of peptides. Similarity scores, in contrast to a fixed-length, pocket-specific representation of amino acids, provide a flexible and powerful way of modelling MHC binding, and can easily be applied to other dynamic sequence problems.
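
As a rough illustration of the general idea (not the paper's actual kernel), the sketch below builds a Gram matrix from a toy pairwise similarity between variable-length peptide strings and feeds it to an SVM with a precomputed kernel; the peptide sequences, labels, and similarity function are invented for illustration.

```python
# Minimal sketch, not the paper's kernel: a generic sequence-similarity
# kernel for variable-length peptides, used with an SVM via a precomputed
# Gram matrix. Peptides, labels and the similarity function are made up.
import numpy as np
from sklearn.svm import SVC

def seq_similarity(a: str, b: str) -> float:
    """Toy similarity: best count of matching residues over all ungapped
    alignments, normalised by the longer sequence length."""
    best = 0
    for offset in range(-(len(b) - 1), len(a)):
        score = sum(1 for i in range(len(a))
                    if 0 <= i - offset < len(b) and a[i] == b[i - offset])
        best = max(best, score)
    return best / max(len(a), len(b))

def gram_matrix(X, Y):
    """Pairwise similarity matrix; a real kernel would additionally need to
    be made positive semi-definite (e.g. by spectrum shifting)."""
    return np.array([[seq_similarity(x, y) for y in Y] for x in X])

# Hypothetical binders (1) and non-binders (0) of different lengths.
peptides = ["PKYVKQNTLKLAT", "GELIGILNAAKVPAD", "AAAAAAAAAWW", "KLATQQ"]
labels = [1, 1, 0, 0]

clf = SVC(kernel="precomputed").fit(gram_matrix(peptides, peptides), labels)
print(clf.predict(gram_matrix(["PKYVKQNTLK"], peptides)))
```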

Relevance: 40.00%

Publisher:

Abstract:

Smart grid technologies have given rise to a liberalised and decentralised electricity market, enabling energy providers and retailers to better understand the demand side and its response to pricing signals. This paper puts forward a reinforcement-learning-powered tool that helps an electricity retailer set the tariff prices it offers, in a bid to optimise its retail strategy. In a competitive market, an energy retailer aims to simultaneously increase the number of contracted customers and its profit margin. We abstract the retailer's tariff-pricing problem as a semi-Markov decision process (SMDP). A hierarchical reinforcement learning approach, MaxQ value function decomposition, is applied to solve the SMDP through interactions with the market. To evaluate our trading strategy, we developed a retailer agent (termed AstonTAC) that uses the proposed SMDP framework to act in an open multi-agent simulation environment, the Power Trading Agent Competition (Power TAC). An evaluation and analysis of the 2013 Power TAC finals shows that AstonTAC successfully selects sell prices that attract as many customers as necessary to maximise its profit margin. Moreover, during the competition, AstonTAC was the only retailer agent that performed well across all retail market settings.
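
The following is a minimal, self-contained sketch of the kind of learning update involved: plain SMDP Q-learning over a discretised set of tariff prices, rather than AstonTAC's MaxQ hierarchy. The state encoding, reward function, price grid and toy market simulator are all invented for illustration.

```python
# Minimal sketch: flat SMDP Q-learning over a discrete set of tariff prices.
# This is NOT AstonTAC's MaxQ decomposition; the states, rewards and toy
# market below are invented purely for illustration.
import random
from collections import defaultdict

PRICES = [0.08, 0.10, 0.12, 0.14]       # candidate tariff prices, hypothetical
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = defaultdict(float)                   # Q[(state, action)] -> value

def toy_market(state, price):
    """Stand-in for the market: returns (reward, next_state, tau), where tau
    is the sojourn time of the semi-Markov transition."""
    customers = max(0, int(100 * (0.2 - price) / 0.2))   # cheaper => more customers
    reward = customers * (price - 0.07)                   # margin over a fake cost
    tau = random.randint(1, 3)                            # timeslots until next decision
    next_state = "high_demand" if customers > 40 else "low_demand"
    return reward, next_state, tau

def choose(state):
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: Q[(state, a)])

state = "low_demand"
for _ in range(5000):
    a = choose(state)
    r, s2, tau = toy_market(state, a)
    # SMDP Q-learning update: discount by gamma**tau for a tau-step transition.
    best_next = max(Q[(s2, b)] for b in PRICES)
    Q[(state, a)] += ALPHA * (r + GAMMA**tau * best_next - Q[(state, a)])
    state = s2

print({p: round(Q[("high_demand", p)], 2) for p in PRICES})
```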

Relevance: 40.00%

Publisher:

Abstract:

Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field-sensitivity, array-sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware by its violation of them during execution.
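
As a very rough, user-space illustration of the KQguard idea (the real system performs this validation inside the kernel), the sketch below learns a whitelist of legitimate callback requests during a training phase and rejects any request whose characteristics were not seen during training; the request fields, module names and symbols are hypothetical.

```python
# User-space toy illustrating the KQguard idea: learn legitimate kernel-queue
# (KQ) requests during training, then reject unknown requests at "runtime".
# The request fields below are hypothetical; the real KQguard validates
# callback function and parameter characteristics inside the kernel.

def signature(request):
    """Characterise a KQ request by the fields used for validation."""
    return (request["callback_module"], request["callback_symbol"])

# Training phase: requests observed via static/dynamic analysis of the
# kernel and benign device drivers (names invented for this sketch).
benign_requests = [
    {"callback_module": "ext4.ko", "callback_symbol": "ext4_end_io"},
    {"callback_module": "e1000.ko", "callback_symbol": "e1000_watchdog"},
]
whitelist = {signature(r) for r in benign_requests}

def kq_guard(request):
    """Reject any request whose signature was not learned as legitimate."""
    return signature(request) in whitelist

# Runtime: a rootkit-style request pointing at an unknown callback is rejected.
suspicious = {"callback_module": "rootkit.ko", "callback_symbol": "hide_proc"}
print(kq_guard(benign_requests[0]))   # True  (accepted)
print(kq_guard(suspicious))           # False (rejected)
```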

Relevance: 40.00%

Publisher:

Abstract:

In many product categories, unit prices facilitate price comparisons across brands and package sizes; this enables consumers to identify the products that provide the greatest value. However, in other product categories, unit prices may be confusing, because there are two types of unit pricing: measure-based and usage-based. Measure-based unit prices are what the name implies; the price is expressed in cents or dollars per unit of measure (e.g. ounce). Usage-based unit prices, on the other hand, are expressed in cents or dollars per use (e.g., wash load or serving). The results of this study show that in two different product categories (i.e., laundry detergent and dry breakfast cereal), measure-based unit prices reduced consumers’ ability to identify higher-value products, whereas providing a usage-based unit price increased that ability. When provided with both a measure-based and a usage-based unit price, respondents did not perform as well as when they were given only a usage-based unit price, additional evidence that the measure-based unit price hindered consumers’ comparisons. Finally, neither of two potential moderators, educating respondents about the meaning of the two measures and having them rank the options in the choice set by value before choosing, eliminated these effects.
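
To make the distinction concrete, here is a small worked example with invented prices and pack sizes, in which the per-ounce (measure-based) ranking and the per-wash-load (usage-based) ranking of two detergents disagree.

```python
# Invented numbers: a concentrated detergent can look expensive per ounce
# (measure-based unit price) yet cheap per wash load (usage-based unit price).
detergents = {
    # name: (shelf price $, ounces, wash loads per package)
    "Regular": (6.00, 100, 25),
    "Concentrated": (7.50, 50, 40),
}

for name, (price, ounces, loads) in detergents.items():
    per_ounce = price / ounces     # measure-based unit price
    per_load = price / loads       # usage-based unit price
    print(f"{name:12s}  ${per_ounce:.3f}/oz   ${per_load:.3f}/load")

# Regular:       $0.060/oz  $0.240/load
# Concentrated:  $0.150/oz  $0.188/load -> cheaper per use despite costing
#                                          more per ounce
```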

Relevance: 40.00%

Publisher:

Abstract:

The financial crisis of 2007-2008 led to extraordinary government intervention in firms and markets. The scope and depth of government action rivaled that of the Great Depression. Many traded markets experienced dramatic declines in liquidity, producing conditions that are normally assumed to be promptly removed by the actions of profit-seeking arbitrageurs. These extreme events motivate the three essays in this work. The first essay seeks, and fails to find, evidence of investor behavior consistent with the broad 'Too Big To Fail' policies enacted during the crisis by government agents. Only in limited circumstances, where government guarantees such as deposit insurance or U.S. Treasury lending lines already existed, did investors impart a premium to the debt security prices of firms under stress. The second essay introduces the Inflation Indexed Swap Basis (IIS Basis) to examine the large differences between cash and derivative markets based upon future U.S. inflation as measured by the Consumer Price Index (CPI). It reports the consistently positive value of this measure as well as the very large positive values it reached in the fourth quarter of 2008 after Lehman Brothers went bankrupt. It concludes that the IIS Basis persists due to limitations in market liquidity and hedging alternatives. The third essay explores the methodology of performing debt-based event studies utilizing credit default swaps (CDS). It provides practical implementation advice to researchers to address limited source data and/or small target firm sample sizes.
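
As a rough illustration of the kind of cash-versus-derivative comparison involved, the snippet below computes one common definition of an inflation swap basis: the zero-coupon inflation swap rate minus the breakeven inflation rate implied by nominal and inflation-indexed Treasury yields. The numbers are invented, and the essay's exact construction of the IIS Basis may differ from this simplified definition.

```python
# Illustrative only: one common way to measure a gap between derivative- and
# cash-market inflation pricing. Numbers are invented; the essay's exact
# IIS Basis construction may differ.
def breakeven_inflation(nominal_yield, real_yield):
    """CPI inflation implied by nominal vs inflation-indexed Treasury yields."""
    return nominal_yield - real_yield

def iis_basis(swap_rate, nominal_yield, real_yield):
    """Zero-coupon inflation swap rate minus the cash-market breakeven rate."""
    return swap_rate - breakeven_inflation(nominal_yield, real_yield)

# Hypothetical late-2008-style quotes (percent per annum):
print(iis_basis(swap_rate=1.50, nominal_yield=2.20, real_yield=2.00))  # 1.30
```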

Relevance: 40.00%

Publisher:

Abstract:

Market-oriented reverse auctions are an efficient and cost-effective method for resource allocation in cloud workflow systems, since they can dynamically allocate resources depending on the supply-demand relationship of the cloud market. However, during the auction the price of a cloud resource is usually fixed, and current resource allocation mechanisms cannot adapt properly to the changeable market, which results in low efficiency of resource utilization. To address this problem, a dynamic pricing reverse auction-based resource allocation mechanism is proposed. During the auction, resource providers can change prices according to the trading situation, so the novel mechanism can increase the chances of making a deal and improve the efficiency of resource utilization. In addition, resource providers can improve their competitiveness in the market by lowering prices; users can thereby obtain cheaper resources in a shorter time, which decreases the monetary cost and completion time of workflow execution. Experiments with different situations and problem sizes evaluate the dynamic pricing-based allocation mechanism (DPAM) on resource utilization and the combined Time*Cost (TC) measure. The results show that DPAM can outperform its fixed-pricing counterpart in resource utilization, monetary cost, and completion time, and also obtains the optimal price reduction rates.
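
A minimal sketch of the idea follows (not the paper's DPAM algorithm): providers submit asks for a requested resource, may lower their asks over rounds in response to the best standing price, and the user allocates to the lowest ask. The provider names, cost floors and the simple 5% undercutting rule are all assumptions made for this illustration.

```python
# Minimal sketch of a dynamic-pricing reverse auction for one cloud resource
# request. Not the paper's DPAM mechanism; providers, costs and the simple
# undercutting rule are invented for illustration.
providers = {
    # name: (initial ask $, minimum acceptable price $)
    "ProviderA": (1.00, 0.70),
    "ProviderB": (0.95, 0.80),
    "ProviderC": (1.10, 0.60),
}
asks = {name: initial for name, (initial, _) in providers.items()}

for rnd in range(5):
    best = min(asks.values())
    for name, (_, floor) in providers.items():
        # Dynamic pricing: a losing provider may undercut the best standing
        # ask by 5%, but never below its own cost floor.
        if asks[name] > best:
            asks[name] = max(floor, round(best * 0.95, 4))
    print(f"round {rnd + 1}: {asks}")

winner = min(asks, key=asks.get)
print("allocated to", winner, "at price", asks[winner])
```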

Relevance: 30.00%

Publisher:

Abstract:

This paper is a deductive theoretical enquiry into the flow of effects from the geometry of price bubbles/busts, to price indices, to the pricing behaviours of sellers and buyers, and back to price bubbles/busts. The intent of the analysis is to suggest analytical approaches for identifying the presence, maturity, and/or sustainability of a price bubble. We present a pricing model to emulate market behaviour, including numeric examples and charts of the interaction of supply and demand. The model extends myopic (single- and multi-period) backward-looking rational expectations into dynamic market solutions to demonstrate how buyers and sellers interact to affect supply and demand, and to show how capital gain expectations can be a destabilising influence, i.e. the lagged effects of past price gains can drive the market price away from long-run market-worth. Investing based on the outputs of past price-based valuation models therefore appears to be more of a game of chance than a sound investment strategy.
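
An illustrative numeric sketch of that destabilising feedback is given below (it is not the paper's model): buyers extrapolate the most recent price change, so the price can overshoot long-run worth before reverting. The reversion speed, extrapolation weight and starting prices are invented coefficients.

```python
# Toy simulation of the lagged-expectations feedback described above: buyers
# extrapolate the most recent price change, which can push the price away
# from long-run worth and then back. All coefficients are invented.
WORTH = 100.0        # long-run market-worth
KAPPA = 0.10         # speed of reversion towards worth
LAMBDA = 0.9         # weight on extrapolated capital gains (destabilising)

prices = [100.0, 102.0]          # small initial shock
for t in range(30):
    expected_gain = prices[-1] - prices[-2]        # myopic, backward-looking
    # Next price: pulled towards worth, pushed by extrapolated gains.
    nxt = prices[-1] + KAPPA * (WORTH - prices[-1]) + LAMBDA * expected_gain
    prices.append(nxt)

peak = max(prices)
print(f"peak price {peak:.1f} vs long-run worth {WORTH:.1f}")
print([round(p, 1) for p in prices[:12]])
```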

Relevance: 30.00%

Publisher:

Abstract:

As the paper’s subtitle suggests, broadband has had a remarkably checkered trajectory in Australia. It was synonymous with the early 1990s information superhighway and seemed to presage a moment in which “content is [to be] king”. It disappeared almost entirely as a public priority in the mid to late 1990s, as infrastructure and content were disconnected in services frameworks focused on information and communication technologies. And it came back in the 2000s as a critical infrastructure for innovation and the knowledge economy. But this time content was not king but rather an intermediate input at the service of innovating industries and processes; broadband was a critical infrastructure for the digitally based creative industries. Today the quality of the broadband infrastructure in Australia, itself an outcome of these different policy frameworks, is identified as “fraudband” holding back business, creativity and consumer uptake. In this paper I use the checkered trajectory of broadband on Australian political and policy horizons as a stepping-off point to reflect on the ideas governing these changing governmental and public settings. This history enables me to explore how content and infrastructure are simultaneously connected and disconnected in our thinking. Finally, I make some remarks about the way communication, particularly media communication, has come to be marginally positioned after initially appearing so central.

Relevance: 30.00%

Publisher:

Abstract:

This paper investigates the suitability of existing performance measures under the assumption of a clearly defined benchmark. A range of measures is examined, including the Sortino Ratio, the Sharpe Selection Ratio (SSR), the Student’s t-test and a decay rate measure. A simulation study is used to assess the power and bias of these measures based on variations in sample size and in the mean performance of two simulated funds. The Sortino Ratio is found to be the superior performance measure, exhibiting more power and less bias than the SSR when the distribution of excess returns is skewed.
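
For reference, here is a small self-contained computation of the Sortino Ratio on simulated excess returns; the benchmark target, sample sizes and return distributions are arbitrary choices for this sketch, and the paper's SSR and decay-rate measures are not reproduced.

```python
# Self-contained Sortino Ratio on simulated excess returns. The benchmark,
# sample sizes and return distributions below are arbitrary illustrations.
import numpy as np

def sortino_ratio(returns, target=0.0):
    """Mean excess return over the target divided by downside deviation,
    where downside deviation uses only returns below the target."""
    returns = np.asarray(returns, dtype=float)
    excess = returns - target
    downside = np.minimum(excess, 0.0)
    downside_dev = np.sqrt(np.mean(downside ** 2))
    return excess.mean() / downside_dev

rng = np.random.default_rng(0)
# Two simulated funds' excess returns over a defined benchmark, one skewed.
fund_a = rng.normal(0.004, 0.02, size=60)
fund_b = rng.gamma(2.0, 0.01, size=60) - 0.015      # right-skewed returns
print(round(sortino_ratio(fund_a), 3), round(sortino_ratio(fund_b), 3))
```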

Relevance: 30.00%

Publisher:

Abstract:

The recently proposed data-driven background dataset refinement technique provides a means of selecting an informative background for support vector machine (SVM)-based speaker verification systems. This paper investigates the characteristics of the impostor examples in such highly informative background datasets. Data-driven dataset refinement individually evaluates the suitability of candidate impostor examples for the SVM background before selecting the highest-ranking examples as a refined background dataset. The characteristics of the refined dataset were then analysed to investigate the desired traits of an informative SVM background. The most informative examples of the refined dataset were found to consist of large amounts of active speech and distinctive language characteristics. The data-driven refinement technique was shown to filter the set of candidate impostor examples to produce a more disperse representation of the impostor population in the SVM kernel space, thereby reducing the number of redundant and less informative examples in the background dataset. Furthermore, data-driven refinement was shown to provide performance gains when applied to the difficult task of refining a small candidate dataset that was mismatched to the evaluation conditions.
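
A simplified sketch of the selection step follows. The suitability criterion used here, how often a candidate impostor example is picked as a support vector across a set of client SVMs, is only one plausible ranking heuristic and is not claimed to be the paper's exact criterion; the random "supervectors", client examples and kept-set size are likewise invented.

```python
# Simplified sketch of background dataset refinement for SVM speaker
# verification. Suitability is scored by how often a candidate impostor
# example is selected as a support vector across a set of client SVMs; this
# heuristic and the synthetic vectors below are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
DIM = 50                                        # stand-in for supervector size
candidates = rng.normal(size=(200, DIM))        # candidate impostor examples
clients = rng.normal(loc=0.5, size=(20, DIM))   # one example per client

sv_counts = np.zeros(len(candidates))
for client in clients:
    X = np.vstack([client[None, :], candidates])
    y = np.r_[1, np.zeros(len(candidates))]
    svm = SVC(kernel="linear", C=1.0).fit(X, y)
    impostor_svs = svm.support_[svm.support_ > 0] - 1   # indices into candidates
    sv_counts[impostor_svs] += 1

# Keep the highest-ranking examples as the refined background dataset.
N_KEEP = 50
refined_background = candidates[np.argsort(sv_counts)[::-1][:N_KEEP]]
print(refined_background.shape)
```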

Relevance: 30.00%

Publisher:

Abstract:

The changing ownership of roles in organisational work-life leads this paper to examine what universities are doing in their academic development practice. It draws on research at an Australian university where ‘artful’ collaboration with the real world aims to build capability for innovative academic community engagement. The paper also presents findings on the ‘return on expectations’ (Hodges, 2004) of community engagement for both academics and their organisational supervisors.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents an extended study on the implementation of support vector machine (SVM) based speaker verification in systems that employ continuous progressive model adaptation using the weight-based factor analysis model. The weight-based factor analysis model compensates for session variations in unsupervised scenarios by incorporating trial confidence measures into the general statistics used in the inter-session variability modelling process. Employing weight-based factor analysis in Gaussian mixture models (GMM) was recently found to provide significant performance gains for unsupervised classification. Further improvements in performance were found through the integration of SVM-based classification into the system by means of GMM supervectors. This study focuses particularly on the way in which a client is represented in the SVM kernel space using single and multiple target supervectors. Experimental results indicate that training client SVMs with a single target supervector maximises performance while exhibiting a certain robustness to the inclusion of impostor training data in the model. Furthermore, the inclusion of low-scoring target trials in the adaptation process is investigated; these trials were found to significantly aid performance.
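
To illustrate what a "single target supervector" client model looks like in practice, here is a simplified sketch (not the paper's full system): MAP-adapted GMM means are stacked into one supervector per utterance, and a client SVM is trained with one target supervector against a background of impostor supervectors. The synthetic features, UBM size, relevance factor and background set are assumptions, and no session compensation (the paper's weight-based factor analysis) is included.

```python
# Simplified sketch of GMM mean-supervector SVM training with a single target
# supervector per client. No session compensation (the paper's weight-based
# factor analysis) is included; the features and "UBM" below are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
FEAT_DIM, N_MIX = 12, 8

# "UBM": a GMM trained on pooled background data (synthetic here).
ubm = GaussianMixture(n_components=N_MIX, covariance_type="diag", random_state=0)
ubm.fit(rng.normal(size=(2000, FEAT_DIM)))

def supervector(frames, relevance=16.0):
    """Stack MAP-adapted component means of the UBM into one supervector."""
    post = ubm.predict_proba(frames)                  # (T, N_MIX) responsibilities
    n_k = post.sum(axis=0)                            # soft occupation counts
    ex_k = post.T @ frames / np.maximum(n_k[:, None], 1e-8)
    alpha = (n_k / (n_k + relevance))[:, None]
    means = alpha * ex_k + (1 - alpha) * ubm.means_   # MAP adaptation of means
    return means.ravel()

# One target utterance for the client, many impostor utterances as background.
target_sv = supervector(rng.normal(loc=0.3, size=(300, FEAT_DIM)))
background = np.array([supervector(rng.normal(size=(300, FEAT_DIM)))
                       for _ in range(30)])

X = np.vstack([target_sv[None, :], background])
y = np.r_[1, np.zeros(len(background))]
client_svm = SVC(kernel="linear").fit(X, y)

test_sv = supervector(rng.normal(loc=0.3, size=(300, FEAT_DIM)))
print(client_svm.decision_function(test_sv[None, :]))  # higher => more target-like
```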