933 results for Return-based pricing kernel


Relevance: 30.00%

Abstract:

Background and Aim: To investigate participation in a second round of colorectal cancer screening using a fecal occult blood test (FOBT) in an Australian rural community, and to assess the demographic characteristics and individual perspectives associated with repeat screening. ---------- Methods: Potential participants from round 1 (50–74 years of age) were sent an intervention package and asked to return a completed FOBT (n = 3406). Participants who tested positive were referred by their doctors to colonoscopy as appropriate. Following screening, 119 participants completed qualitative telephone interviews. Multivariable logistic regression models evaluated the association between round-2 participation and other variables. ---------- Results: Round-2 participation was 34.7%; the strongest predictor was participation in round 1. Repeat participants were more likely to be female; inconsistent screeners were more likely to be younger (aged 50–59 years). The proportion of positive FOBTs was 12.7%, colonoscopy compliance was 98.6%, and the positive predictive value for cancer or adenoma with advanced pathology was 23.9%. Reasons for participation included testing as a precautionary measure and having a family history of, or friends with, colorectal cancer; reasons for non-participation included apathy and doctors' advice against screening. ---------- Conclusion: Participation was relatively low and consistent across rounds. Unless suitable strategies are identified to overcome behavioral trends and/or to screen out ineligible participants, little change in overall participation rates can be expected across rounds.

Relevance: 30.00%

Abstract:

This paper derives from research in progress intended to produce both Design Research (DR) and Design Science (DS) outputs: the former a management decision tool based on IS-Impact (Gable et al. 2008) kernel theory; the latter methodological learnings derived from synthesis of the literature and reflection on the DR 'case study' experience. The paper introduces a generic, detailed and pragmatic DS 'Research Roadmap', or methodology, derived at this stage primarily from synthesis and harmonization of relevant concepts identified through systematic archival analysis of the related literature. The scope of the Roadmap has also been influenced by the parallel study aim of undertaking DR while applying and further evolving the Roadmap. The Roadmap is presented in response to the dearth of detailed guidance available to novice researchers in Design Science Research (DSR) and, though preliminary, is expected to evolve and gradually be substantiated through experience of its application. A key distinction of the Roadmap from other DSR methods is its breadth of coverage of published DSR concepts and activities, together with its detail and scope. It represents a useful synthesis and integration of otherwise highly disparate DSR-related concepts.

Relevance: 30.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
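To make the implicit embedding concrete, the short sketch below (an illustration only, not code from the paper) builds an RBF kernel matrix over invented training and test points together, as in the transductive setting described above, and checks numerically that it is symmetric and positive semidefinite. Only numpy is assumed; the data and the bandwidth are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(6, 3))             # 6 labelled points, 3 features
X_test = rng.normal(size=(4, 3))              # 4 unlabelled points
X = np.vstack([X_train, X_test])              # embed train and test jointly

# Inner products are specified implicitly via an RBF kernel (bandwidth 1.0).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
K = np.exp(-sq_dists / 2.0)

assert np.allclose(K, K.T)                    # symmetric
assert np.linalg.eigvalsh(K).min() > -1e-10   # all eigenvalues >= 0, hence PSD
```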

Relevance: 30.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.
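A minimal sketch of the kind of SDP the abstract describes is given below, under explicit assumptions: the kernel is restricted to a linear combination of fixed candidate kernel matrices computed over training and test points together, the objective is alignment with the training labels, and CVXPY with an SDP-capable solver is assumed to be available. Function and variable names are invented; this is not the authors' formulation or code.

```python
import numpy as np
import cvxpy as cp

def learn_combined_kernel(candidate_kernels, y_train, trace_budget=10.0):
    """Learn weights mu for K = sum_i mu_i * K_i over training + test points,
    maximising alignment with the training labels subject to K being positive
    semidefinite with a fixed trace (a small semidefinite program)."""
    n = candidate_kernels[0].shape[0]
    n_train = len(y_train)
    m = len(candidate_kernels)

    mu = cp.Variable(m)
    K = cp.Variable((n, n), PSD=True)          # the learned kernel matrix
    yy = np.outer(y_train, y_train)            # label similarity, training block

    constraints = [K == sum(mu[i] * candidate_kernels[i] for i in range(m)),
                   cp.trace(K) == trace_budget]
    alignment = cp.sum(cp.multiply(yy, K[:n_train, :n_train]))
    cp.Problem(cp.Maximize(alignment), constraints).solve()
    return mu.value, K.value
```

The test-by-test block of the returned matrix then supplies learned similarities between unlabelled points, which is the transductive step the abstract refers to.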

Relevance: 30.00%

Abstract:

This paper critically analyses the proposed Australian regulatory approach to the crediting of biological sequestration activities (biosequestration) under the Australian Carbon Farming Initiative and its interaction with State-based carbon rights, the national carbon-pricing mechanism, and the international Kyoto Protocol and carbon-trading markets. Norms and principles have been established by the Kyoto Protocol to guide the creation of additional, verifiable, and permanent credits from biosequestration activities. This paper examines the proposed arrangements under the Australian Carbon Farming Initiative and Carbon Pricing Mechanism to determine whether they are consistent with those international norms and standards. This paper identifies a number of anomalies associated with the legal treatment of additionality and permanence and issuance of carbon credits within the Australian schemes. In light of this, the paper considers the possible legal implications for the national and international transfer, surrender and use of these offset credits.

Relevance: 30.00%

Abstract:

Discovering proper search intents is a vital process for returning desired results, and it has been a consistently active research topic in information retrieval in recent years. Existing methods mainly rely on context-based mining, query expansion, and user profiling techniques, which still suffer from the ambiguity of search queries. In this paper, we introduce a novel ontology-based approach that uses a world knowledge base to construct personalized ontologies and identify adequate concept levels for matching user search intents. An iterative mining algorithm is designed to evaluate potential intents level by level until the best result is reached. The proposed approach is evaluated on the large RCV1 data set, and experimental results indicate a distinct improvement in top precision compared with baseline models.
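As a purely conceptual sketch of evaluating candidate intents level by level (not the paper's algorithm: the toy taxonomy, the overlap score and the stopping rule below are all simplifying assumptions), the idea can be pictured as follows.

```python
# Walk a small concept taxonomy level by level, keep the level whose concepts
# best overlap the query terms, and stop once the score no longer improves.
TAXONOMY = {
    0: ["science"],
    1: ["computer science", "biology"],
    2: ["information retrieval", "machine learning", "genetics"],
    3: ["query expansion", "kernel methods", "gene sequencing"],
}

def score(concept: str, query_terms: set) -> float:
    terms = set(concept.split())
    return len(terms & query_terms) / len(terms)

def best_intent_level(query: str):
    query_terms = set(query.lower().split())
    best = (0.0, None)
    for level in sorted(TAXONOMY):
        level_best = max(score(c, query_terms) for c in TAXONOMY[level])
        if level_best < best[0]:
            break                      # score stopped improving: stop descending
        if level_best > best[0]:
            best = (level_best, level)
    return best

print(best_intent_level("kernel methods for retrieval"))   # -> (1.0, 3)
```

A full system would traverse a world knowledge base rather than this four-level toy.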

Relevance: 30.00%

Abstract:

Background: There are inequalities in geographical access to and delivery of health care services in Australia, particularly for cardiovascular disease (CVD), Australia's major cause of death. Analyses and models that can inform and positively influence strategies to augment services and preventative measures are needed. The Cardiac-ARIA project is using geographic information system (GIS) technology to develop a national index for each of Australia's 13,000 population centres. The index will describe the spatial distribution of CVD health care services available to support populations at risk, in a timely manner, after a major cardiac event. Methods: In the initial phase of the project, an expert panel of cardiologists and an emergency physician identified key elements of national and international guidelines for the management of acute coronary syndromes, cardiac arrest, life-threatening arrhythmias and acute heart failure, from the time of onset (potentially dialling 000) to return from hospital to the community (cardiac rehabilitation). Results: A systematic search was undertaken to identify the geographical location and type of cardiac services currently available. This enabled derivation of a master dataset of necessary services, e.g. telephone networks, ambulance, RFDS, helicopter retrieval services, road networks, hospitals, general practitioners, medical community centres, pathology services, CCUs, catheterisation laboratories, cardio-thoracic surgery units and cardiac rehabilitation services. Conclusion: This unique and innovative project has the potential to deliver a powerful tool to both highlight and combat the burden of CVD in urban and regional Australia.
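For illustration only, the kind of spatial building block such an index might rest on is the distance from each population centre to its nearest cardiac service. The sketch below computes great-circle (haversine) distances with invented coordinates and facility names; it is not part of the Cardiac-ARIA methodology.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

population_centres = {"TownA": (-27.47, 153.03), "TownB": (-23.70, 133.88)}
cath_labs = {"HospitalX": (-27.50, 153.01), "HospitalY": (-34.93, 138.60)}

for town, (lat, lon) in population_centres.items():
    nearest = min(haversine_km(lat, lon, *xy) for xy in cath_labs.values())
    print(f"{town}: nearest catheterisation lab ~{nearest:.0f} km")
```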

Relevance: 30.00%

Abstract:

This paper presents an experimental study that examines the accuracy of various information retrieval techniques for Web service discovery. The main goal of this research is to evaluate algorithms for semantic Web service discovery. The evaluation is comprehensively benchmarked using more than 1,700 real-world WSDL documents from the INEX 2010 Web Service Discovery Track dataset. For automatic search, we successfully use Latent Semantic Analysis and BM25 to perform Web service discovery. Moreover, we provide a linking analysis which automatically links possible atomic Web services to meet the complex requirements of users. Our fusion engine recommends a final result to users. Our experiments show that linking analysis can improve the overall performance of Web service discovery. We also find that keyword-based search can quickly return results but is limited in understanding users' goals.
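A small sketch of the LSA side of such a pipeline is shown below, assuming scikit-learn is available and substituting an invented toy corpus for the WSDL collection; the BM25 ranker and the fusion step are omitted, and the corpus and query are not from the track.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

services = [
    "currency conversion service returns exchange rate for two currencies",
    "weather forecast service returns temperature for a city",
    "flight booking service searches and books airline tickets",
]
query = ["convert money between currencies"]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(services + query)

svd = TruncatedSVD(n_components=2, random_state=0)   # latent semantic space
Z = svd.fit_transform(X)

scores = cosine_similarity(Z[-1:], Z[:-1])[0]        # query vs. each service
for score, desc in sorted(zip(scores, services), reverse=True):
    print(f"{score:.2f}  {desc}")
```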

Relevance: 30.00%

Abstract:

Applying ice or other forms of topical cooling is a popular method of treating sports injuries, and it is commonplace for athletes to return to competitive activity shortly or immediately after the application of a cold treatment. In this article, we examine the effect of local tissue cooling on outcomes relating to functional performance and discuss their relevance to the sporting environment. A computerized literature search, citation tracking and hand search were performed up to April 2011. Eligible studies were trials involving healthy human participants that described the effects of cooling on outcomes relating to functional performance. Two reviewers independently assessed the validity of included trials and calculated effect sizes. Thirty-five trials met the inclusion criteria; all had a high risk of bias. The mean sample size was 19. Meta-analyses were not undertaken due to clinical heterogeneity. The majority of studies used cooling durations longer than 20 minutes. Strength (peak torque/force) was reported by 25 studies, with approximately 75% recording a decrease in strength immediately following cooling. There was evidence from six studies that cooling adversely affected speed, power and agility-based running tasks; two studies found this was negated by a short rewarming period. There was conflicting evidence on the effect of cooling on isolated muscular endurance. A small number of studies found that cooling decreased upper limb dexterity and accuracy. The current evidence base suggests that athletes will probably be at a performance disadvantage if they return to activity immediately after cooling. This is based on cooling for longer than 20 minutes, which may exceed the durations employed in some sporting environments. In addition, some of the reported changes were clinically small and may only be relevant in elite sport. Until better evidence is available, practitioners should use short cooling applications and/or undertake a progressive warm-up prior to returning to play.

Relevance: 30.00%

Abstract:

Dhaka, the capital of Bangladesh, is facing severe traffic congestion. Owing to flaws in past land use and transport planning decisions, uncontrolled population growth and urbanization, Dhaka's traffic condition is worsening. Road space is widely regarded in the literature as a utility, so a common view among transport economists is that its usage ought to be charged. Road pricing policy has proven effective in managing travel demand and reducing traffic congestion on road networks in a number of cities, including London, Stockholm and Singapore. Road pricing as an economic mechanism to manage travel demand can be more effective and user-friendly when revenue is hypothecated into supply alternatives such as improvements to the transit system. This research investigates the feasibility of adopting road pricing in Dhaka in connection with a significant Bus Rapid Transit (BRT) project. Because both are very new concepts for the population of Dhaka, public acceptability would be a principal issue driving their success or failure. This paper explores the travel behaviour of workers in Dhaka and public perception of road pricing for work trips, based on workers' travel behaviour. A revealed preference and stated preference survey was conducted on a sample of workers in Dhaka. Respondents were asked a limited set of demographic questions and about their current travel behaviour, and were then given several hypothetical choices of integrated BRT and road pricing to choose from. A key finding from the survey is that the objective of integrated road pricing, namely subsidising Bus Rapid Transit through road pricing revenue to reduce the BRT fare, cannot be achieved in Dhaka. This is because most respondents stated that they would choose the cheapest option, Walk-BRT-Walk, even though it would be more time-consuming and less comfortable, as they would have to walk from home to the BRT station and from the BRT station back home. A proper economic analysis has to be carried out to determine the appropriate BRT fare and road charge, with some incentive for low-income people.

Relevance: 30.00%

Abstract:

Inquiries into return predictability have traditionally been limited to the conditional mean, while the literature on portfolio selection is replete with moment-based analyses that consider up to the fourth moment. This paper develops a distribution-based framework for both return prediction and portfolio selection. More specifically, a time-varying return distribution is modeled through quantile regressions and copulas, using quantile regressions to extract information in the marginal distributions and copulas to capture the dependence structure. A preference function which captures higher moments is proposed for portfolio selection. An empirical application highlights the additional information provided by the distributional approach which cannot be captured by the traditional moment-based methods.
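The sketch below illustrates the general idea rather than the paper's exact model: quantile regressions (via statsmodels) give one asset's conditional return quantiles, which define its marginal distribution, and a Gaussian copula with an assumed correlation ties it to a second, hypothetical asset with a normal margin. All data, the single predictor and the parameter values are invented.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
T = 500
predictor = rng.normal(size=T)                       # e.g. a lagged signal
returns = 0.01 * predictor + 0.02 * rng.standard_t(df=5, size=T)

# Conditional return quantiles given predictor = 0.5, one quantile regression per level.
taus = np.linspace(0.05, 0.95, 19)
X = sm.add_constant(predictor)
cond_q = []
for t in taus:
    res = sm.QuantReg(returns, X).fit(q=t)
    cond_q.append(float(np.dot([1.0, 0.5], res.params)))
cond_q = np.sort(np.array(cond_q))                   # enforce monotone quantiles

def marginal_ppf(u):
    """Invert the fitted conditional distribution by interpolating its quantiles."""
    return np.interp(u, taus, cond_q)

# Gaussian copula linking this asset to a second, hypothetical asset.
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u = norm.cdf(z)
asset1 = marginal_ppf(u[:, 0])                       # margin from quantile regression
asset2 = 0.01 * norm.ppf(u[:, 1])                    # assumed normal margin
portfolio = 0.5 * asset1 + 0.5 * asset2
print("simulated portfolio mean/std:", portfolio.mean(), portfolio.std())
```

A preference function over the simulated portfolio distribution could then rank candidate weights, which is where a higher-moment criterion such as the one the paper proposes would enter.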

Relevance: 30.00%

Abstract:

Modelling video sequences by subspaces has recently shown promise for recognising human actions. Subspaces are able to accommodate the effects of various image variations and can capture the dynamic properties of actions. Subspaces form a non-Euclidean and curved Riemannian manifold known as a Grassmann manifold. Inference on manifold spaces is usually achieved by embedding the manifolds in higher dimensional Euclidean spaces. In this paper, we instead propose to embed the Grassmann manifolds into reproducing kernel Hilbert spaces and then tackle the problem of discriminant analysis on such manifolds. To achieve efficient machinery, we propose graph-based local discriminant analysis that utilises within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, respectively. Experiments on KTH, UCF Sports, and Ballet datasets show that the proposed approach obtains marked improvements in discrimination accuracy in comparison to several state-of-the-art methods, such as the kernel version of affine hull image-set distance, tensor canonical correlation analysis, spatial-temporal words and hierarchy of discriminative space-time neighbourhood features.
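One standard way to carry out the embedding the abstract describes is through a Grassmann kernel. The sketch below uses the projection kernel k(A, B) = ||A^T B||_F^2 on toy data (random matrices standing in for action clips), producing a Gram matrix on which a kernel discriminant analysis could then run; it mirrors the general recipe, not the paper's graph-based formulation.

```python
import numpy as np

def subspace_basis(features, dim=3):
    """Orthonormal basis of the dominant subspace of a (frames x features) matrix."""
    u, _, _ = np.linalg.svd(features.T, full_matrices=False)
    return u[:, :dim]                            # features x dim, orthonormal columns

def projection_kernel(A, B):
    """Projection kernel between two subspaces given by orthonormal bases."""
    return np.linalg.norm(A.T @ B, "fro") ** 2

rng = np.random.default_rng(0)
clips = [rng.normal(size=(20, 10)) for _ in range(5)]   # 5 toy "clips"
bases = [subspace_basis(c) for c in clips]

K = np.array([[projection_kernel(a, b) for b in bases] for a in bases])
print(np.round(K, 2))                            # symmetric PSD Gram matrix on the manifold
```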

Relevance: 30.00%

Abstract:

Our daily lives are becoming more and more dependent upon smartphones due to their increased capabilities. Smartphones are used in various ways, from payment systems to assisting the lives of elderly or disabled people. Security threats to these devices are becoming increasingly dangerous, since there is still a lack of proper security tools for protection. Android has emerged as an open smartphone platform which allows modification even at the operating system level; third-party developers therefore have the opportunity to develop kernel-based, low-level security tools, which is unusual for smartphone platforms. Android quickly gained popularity among smartphone developers and beyond, since it is based on Java on top of an "open" Linux kernel, in contrast to former proprietary platforms with very restrictive SDKs and corresponding APIs. Symbian OS, for example, which held the greatest market share among smartphone OSs, closed critical APIs to ordinary developers and introduced application certification because it had been the main target of smartphone malware: more than 290 malware samples designed for Symbian OS appeared from July 2004 to July 2008. Android, in turn, promises to be completely open source. Together with the Linux-based smartphone OS OpenMoko, open smartphone platforms may attract malware writers to create malicious applications that endanger critical smartphone applications and owners' privacy.

In this work, we present our current results in analyzing the security of Android smartphones, with a focus on the Linux side. Our results are not limited to Android; they are also applicable to Linux-based smartphones such as the OpenMoko Neo FreeRunner. Our contribution is three-fold. First, we analyze the Android framework and the Linux kernel to check security functionalities. We survey well-accepted security mechanisms and tools which can increase device security, describe how to adopt these security tools in the Android kernel, and provide an overhead analysis in terms of resource usage. As open smartphones are released and may increase their market share similarly to Symbian, they may attract the attention of malware writers. Therefore, our second contribution focuses on malware detection techniques at the kernel level. We test the applicability of existing signature-based and intrusion detection methods in the Android environment. We focus on monitoring events in the kernel; that is, identifying critical kernel, log file, file system and network activity events, and devising efficient mechanisms to monitor them in a resource-limited environment.

Our third contribution involves initial results of our malware detection mechanism based on static function call analysis. We identified approximately 105 Executable and Linking Format (ELF) executables installed on the Linux side of Android and performed a statistical analysis of the function calls used by these applications. The results of the analysis can be compared with those of newly installed applications to detect significant differences; additionally, certain function calls indicate malicious activity. Therefore, we present a simple decision tree for deciding the suspiciousness of the corresponding application. Our results present a first step towards detecting malicious applications on Android-based devices.
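As a rough, illustrative counterpart to the static function call analysis mentioned above (not the authors' tool), the sketch below lists the dynamic function symbols an ELF binary imports using the standard nm utility and scores them against a hand-made watchlist. The watchlist, the example binaries and the score are assumptions; a real detector would compare call profiles against a statistical baseline as described.

```python
import subprocess

SUSPICIOUS = {"ptrace", "execve", "fork", "dlopen", "system"}

def imported_functions(path: str) -> set:
    """Undefined dynamic symbols, i.e. functions the binary expects from shared libraries."""
    out = subprocess.run(["nm", "-D", "--undefined-only", path],
                         capture_output=True, text=True, check=True).stdout
    return {line.split()[-1].split("@")[0] for line in out.splitlines() if line.strip()}

def suspicion_score(path: str) -> float:
    """Fraction of watchlist calls the binary imports (a crude stand-in for a decision rule)."""
    return len(imported_functions(path) & SUSPICIOUS) / len(SUSPICIOUS)

if __name__ == "__main__":
    for binary in ["/bin/ls", "/bin/ping"]:
        print(binary, suspicion_score(binary))
```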

Relevance: 30.00%

Abstract:

Entity-oriented retrieval aims to return a list of relevant entities, rather than documents, to provide exact answers to user queries. The nature of entity-oriented retrieval requires identifying the semantic intent of user queries, i.e., understanding the semantic role of query terms and determining the semantic categories which indicate the class of target entities. Existing methods are not able to exploit the semantic intent by capturing the semantic relationship between terms in a query and in a document that contains entity-related information. To improve the understanding of the semantic intent of user queries, we propose a concept-based retrieval method that not only automatically identifies the semantic intent of user queries, i.e., the Intent Type and Intent Modifier, but also introduces concepts represented by Wikipedia articles into user queries. We evaluate the proposed method on entity profile documents annotated with concepts from the Wikipedia category and list structure. Empirical analysis reveals that the proposed method outperforms several state-of-the-art approaches.
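A toy sketch of the Intent Type / Intent Modifier split follows, purely for orientation: the first query term is naively taken to name the target entity class and the remainder its modifier, and the class is looked up in a tiny, made-up set of Wikipedia-style category titles. The real method's intent identification and concept matching are far richer than this.

```python
# Invented category titles standing in for Wikipedia's category structure.
CATEGORIES = {
    "universities": "Category:Universities and colleges",
    "airlines": "Category:Airlines",
    "novels": "Category:Novels",
}

def parse_intent(query: str) -> dict:
    """Naively split a query into intent type (head term) and intent modifier (the rest)."""
    head, _, rest = query.lower().partition(" ")
    return {"intent_type": head, "intent_modifier": rest,
            "concept": CATEGORIES.get(head)}

print(parse_intent("universities in Queensland"))
print(parse_intent("airlines founded before 1950"))
```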

Relevance: 30.00%

Abstract:

This project was a step towards developing intrusion detection systems for distributed environments such as web services. It investigates a new detection approach based on so-called "taint-marking" techniques and introduces a theoretical framework along with its implementation in the Linux kernel.
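As a purely conceptual toy, far removed from a kernel-level implementation, taint-marking can be pictured as follows: values derived from untrusted input carry a marker, the marker propagates through operations, and a sink check raises an alert before marked data reaches a sensitive operation. The class and function names below are invented for illustration.

```python
class Tainted(str):
    """A string remembered as coming from an untrusted source."""

def from_request(value: str) -> "Tainted":
    return Tainted(value)                       # taint source: mark external input

def concat(a: str, b: str) -> str:
    result = a + b                              # taint propagation through an operation
    return Tainted(result) if isinstance(a, Tainted) or isinstance(b, Tainted) else result

def run_query(sql: str):
    if isinstance(sql, Tainted):                # taint sink: refuse marked data
        raise RuntimeError("tainted data reached a sensitive sink")
    print("executing:", sql)

user_input = from_request("1 OR 1=1")
try:
    run_query(concat("SELECT * FROM t WHERE id=", user_input))
except RuntimeError as exc:
    print("intrusion detection alert:", exc)
```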