45 results for Reliable Computations


Relevance: 20.00%

Abstract:

In this paper we study a reliable downloading algorithm for BitTorrent-like systems and validate it mathematically. BitTorrent-like systems have become immensely popular peer-to-peer file distribution tools on the Internet in recent years. We analyze them theoretically, point out some of their limitations, especially regarding reliability, and propose an algorithm that resolves these problems by exploiting the redundant copies held by neighbors in the P2P network; under certain conditions it can also improve downloading speed. Our preliminary simulations show that the proposed algorithm works well: the improved BitTorrent-like systems are very stable and reliable.
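
As a rough illustration of the neighbor-redundancy idea described above (not the paper's actual algorithm; the peer structure, the fetch callback and all field names are assumptions), a piece can be requested from the fastest neighbor holding a copy and re-requested from a redundant copy on failure:

def download_piece(piece_id, neighbors, fetch):
    """Try neighbors holding redundant copies of a piece, fastest first.

    neighbors: list of dicts such as {"id": ..., "pieces": set(...), "rate": ...}
    fetch: callable(neighbor_id, piece_id) -> bytes, or None on failure
    """
    holders = [n for n in neighbors if piece_id in n["pieces"]]
    # Prefer faster peers; redundant copies in other neighbors act as fallbacks.
    for peer in sorted(holders, key=lambda n: n["rate"], reverse=True):
        data = fetch(peer["id"], piece_id)
        if data is not None:
            return data
    return None  # piece currently unavailable from any neighbor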

Relevance: 20.00%

Abstract:

A critical question in data mining is whether we can always trust what a data mining system discovers, unconditionally. The answer is obviously no. If not, when can we trust the discovery? What factors affect the reliability of the discovery, and how do they affect it? These are some of the interesting questions to be investigated. In this chapter we first provide a definition and measurements of reliability and analyse the factors that affect it. We then examine the impact of model complexity, weak links, varying sample sizes and the ability of different learners on the reliability of graphical model discovery. The experimental results reveal that (1) the larger the sample size used for discovery, the higher the reliability obtained; (2) the stronger a graph link is, the easier it is to discover and thus the higher the reliability that can be achieved; and (3) the complexity of a graph also plays an important role: the higher the complexity, the more difficult the graph is to induce and the lower the resulting reliability. We also examined the performance differences between discovery algorithms, which reveals the impact of the discovery process itself. The experimental results show the superior reliability and robustness of the MML method over standard significance tests in recovering graph links from small samples and weak links.
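
The sample-size and link-strength effects can be reproduced in miniature with a toy experiment (our own sketch, not the chapter's code; a chi-square independence test stands in for a generic discovery procedure, and the link strength, trial count and significance level are assumptions):

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def recovery_rate(n, link_strength=0.15, trials=200, alpha=0.05):
    """Fraction of trials in which a weak binary link X -> Y is detected."""
    hits = 0
    for _ in range(trials):
        x = rng.integers(0, 2, size=n)
        # P(Y = 1) is 0.5 + link_strength when X = 1 and 0.5 - link_strength otherwise.
        p = 0.5 + link_strength * (2 * x - 1)
        y = (rng.random(n) < p).astype(int)
        table = np.array([[np.sum((x == i) & (y == j)) for j in (0, 1)] for i in (0, 1)])
        if chi2_contingency(table)[1] < alpha:
            hits += 1
    return hits / trials

for n in (50, 200, 1000):
    print(n, recovery_rate(n))  # recovery of the weak link improves as the sample grows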

Relevance: 20.00%

Abstract:

Regression lies at the heart of statistics; it is one of the most important multivariate techniques available for extracting knowledge in almost every field of study and research. Nowadays it also attracts great interest in related fields such as machine learning, pattern recognition and data mining. Investigating outliers (exceptional observations) has been a century-long problem for data analysts and researchers. Blind application of data can have dangerous consequences, leading to the discovery of meaningless patterns and to imperfect knowledge. As a result of the digital revolution and the growth of the Internet and intranets, data continue to accumulate at an exponential rate, so detecting outliers and studying their costs and benefits as a tool for reliable knowledge discovery deserves careful attention. Outlier investigation in regression has received great attention over the last few decades within two schools of thought: robust regression and regression diagnostics. Robust regression first fits a regression to the majority of the data and then identifies outliers as the points with large residuals from the robust fit, whereas regression diagnostics first finds the outliers, deletes or corrects them, and then fits the remaining data by classical (usual) methods. Although there was much confusion at the outset, researchers have now reached a consensus that robustness and diagnostics are two complementary approaches to data analysis and that neither alone is sufficient. In this chapter we discuss both under the single umbrella of regression diagnostics. The chapter explains the necessity and viewpoints of regression diagnostics and presents several contemporary methods from each of the aforesaid categories through numerical examples in linear regression, together with current challenges and possible future research directions. Our aim is to make the chapter self-contained while maintaining its general accessibility.
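
A minimal sketch of the two complementary routes (assumed tooling: statsmodels for the diagnostics route and scikit-learn's Huber regression for the robust route; the cut-offs and the simulated data are illustrative, not the chapter's numerical examples):

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 60)
y = 2.0 * x + 1.0 + rng.normal(0, 1, 60)
y[:3] += 15  # plant a few vertical outliers

# Diagnostics route: fit OLS first, then inspect externally studentized residuals.
ols = sm.OLS(y, sm.add_constant(x)).fit()
student = ols.get_influence().resid_studentized_external
diag_flags = np.where(np.abs(student) > 3)[0]

# Robust route: fit the bulk of the data first, then flag points far from the robust fit.
huber = HuberRegressor().fit(x.reshape(-1, 1), y)
resid = y - huber.predict(x.reshape(-1, 1))
scale = np.median(np.abs(resid)) / 0.6745   # MAD-based estimate of the residual scale
robust_flags = np.where(np.abs(resid) > 3 * scale)[0]

print("diagnostics flag:", diag_flags, "robust fit flags:", robust_flags)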

Relevance: 20.00%

Abstract:

We present improved algorithms for automatic fade and dissolve detection in digital video analysis. We devise new two-step algorithms for fade and dissolve detection and introduce a method for eliminating false positives from a list of detected candidate transitions. In our detailed study of these gradual shot transitions, our objective has been to accurately classify the type of transition (fade-in, fade-out, or dissolve) and to precisely locate the boundaries of the transitions. This distinguishes our work from earlier work on scene change detection, which focused on identifying the existence of a transition rather than its precise temporal extent. We evaluate our algorithms against two other commonly used methods on a comprehensive data set and demonstrate the improved performance due to our enhancements.
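
The first-pass idea behind candidate fade detection can be sketched as follows (a simplified stand-in using mean frame luminance only; the window length and black level are assumed, and the paper's second step of false-positive elimination is omitted):

import numpy as np

def fade_candidates(mean_luma, min_len=8, black_level=16.0):
    """Flag frame spans whose mean luminance ramps monotonically to/from near-black.

    mean_luma: 1-D sequence of per-frame mean intensities (0-255).
    Returns a list of (start_frame, end_frame, kind) candidate spans.
    """
    spans = []
    for i in range(len(mean_luma) - min_len):
        window = np.asarray(mean_luma[i:i + min_len], dtype=float)
        diffs = np.diff(window)
        if np.all(diffs < 0) and window[-1] < black_level:
            spans.append((i, i + min_len - 1, "fade-out"))
        elif np.all(diffs > 0) and window[0] < black_level:
            spans.append((i, i + min_len - 1, "fade-in"))
    return spans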

Relevance: 20.00%

Abstract:

The web is a rich resource for information discovery, and as a result web mining is a hot topic. However, a reliable mining result depends on the reliability of the data set. Every second, the web generates huge amounts of data, such as web page requests and file transfers. These data reflect human behaviour in cyberspace and are therefore valuable for analysis in various disciplines, e.g. social science and network security. How to store the data is a challenge. A usual strategy is to save an abstract of the data, for example using aggregation functions to preserve the features of the original data in much smaller space. A key problem, however, is that such information can be distorted by the presence of illegitimate traffic, e.g. botnet recruitment scanning or DDoS attack traffic. An important consideration in web-related knowledge discovery is therefore the robustness of the aggregation method, which in turn may be affected by the reliability of the network traffic data. In this chapter, we first present methods of aggregation, and then employ information distances to filter out anomalous data as a preparation for web data mining.
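
A hedged sketch of the filtering step (the per-window feature histograms, the Jensen-Shannon distance and the threshold are our assumptions, not necessarily the information distance used in the chapter):

import numpy as np
from scipy.spatial.distance import jensenshannon

def filter_windows(windows, baseline, threshold=0.25):
    """Keep traffic windows whose aggregated histogram stays close to a baseline profile.

    windows: list of 1-D count histograms (e.g. requests per feature bin per time window).
    baseline: 1-D count histogram built from known-clean traffic.
    """
    base = np.asarray(baseline, float)
    base = base / base.sum()
    kept = []
    for w in windows:
        p = np.asarray(w, float)
        p = p / p.sum()
        # A large distance suggests distortion by scanning/DDoS-like traffic; drop the window.
        if jensenshannon(p, base) <= threshold:
            kept.append(w)
    return kept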

Relevance: 20.00%

Abstract:

Multicast is an important mechanism in modern wireless networks and has attracted significant efforts to improve its performance with respect to different metrics, including throughput, delay and energy efficiency. Traditionally, an ideal loss-free channel model is widely used to facilitate routing protocol design. However, the quality of wireless links can be degraded or even jeopardized by many factors, such as collisions, fading or environmental noise. In this paper, we propose a reliable multicast protocol, called CodePipe, with advanced performance in terms of energy efficiency, throughput and fairness in lossy wireless networks. Built upon opportunistic routing and random linear network coding, CodePipe not only simplifies transmission coordination between nodes but also improves multicast throughput significantly by exploiting both intra-batch and inter-batch coding opportunities. In particular, four key techniques, namely an LP-based opportunistic routing structure, opportunistic feeding, fast batch moving and inter-batch coding, are proposed to offer substantial improvements in throughput, energy efficiency and fairness. We evaluate CodePipe in the ns-2 simulator by comparing it with two other state-of-the-art multicast protocols, MORE and Pacifier. Simulation results show that CodePipe significantly outperforms both of them.
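
For intuition only, intra-batch random linear coding can be sketched over GF(2), where a coded packet is a random XOR combination of the native packets in a batch; CodePipe's LP-based routing structure, opportunistic feeding, fast batch moving and inter-batch coding are not reproduced here.

import random

def encode_batch(batch, seed=None):
    """Return one coded packet as a random GF(2) (XOR) combination of a batch.

    batch: list of equal-length bytes objects (native packets).
    Returns (coefficient_vector, coded_payload).
    """
    rng = random.Random(seed)
    coeffs = [rng.randint(0, 1) for _ in batch]
    if not any(coeffs):                      # avoid the useless all-zero combination
        coeffs[rng.randrange(len(batch))] = 1
    payload = bytearray(len(batch[0]))
    for coef, pkt in zip(coeffs, batch):
        if coef:
            for i, byte in enumerate(pkt):
                payload[i] ^= byte
    return coeffs, bytes(payload)

# A receiver can decode once it has collected len(batch) linearly independent
# coefficient vectors, e.g. by Gaussian elimination over GF(2).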

Relevance: 20.00%

Abstract:

This paper presents a Minimal Causal Model Inducer that can be used for reliable knowledge discovery. The minimal-model semantics of causal discovery is an essential concept for identifying a best-fitting model, in the sense of being satisfactorily consistent with the given data while remaining the simpler, less expressive model. Consistency is one of the major measures of reliability in knowledge discovery, so developing an algorithm able to derive a minimal model is an interesting topic in the area of reliable knowledge discovery. The various causal induction algorithms and tools developed so far cannot guarantee that the derived model is minimal and consistent. It has been proved that the MML induction approach introduced by Wallace, Korb and Dai is a minimal causal model learner. In this paper, we further prove that the developed minimal causal model learner is reliable in the sense of satisfactory consistency. The experimental results obtained from tests on a number of artificial and real models, provided in this paper, confirm this theoretical result.
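
As a loose illustration of the two-part trade-off behind minimal-model induction (a simplified linear-Gaussian score of our own, not the Wallace-Korb-Dai MML formulation): a candidate structure is charged for describing its edges and for encoding the data given the structure, and the minimal consistent model minimises the total.

import numpy as np

def two_part_score(data, parents, child, bits_per_edge=8.0):
    """Crude two-part cost (in bits) for the local structure 'child <- parents'.

    data: array of shape (n_samples, n_variables); parents: list of column indices.
    Structure cost is a fixed charge per edge (an assumption); data cost is the
    Gaussian negative log-likelihood of the residuals of a least-squares fit.
    """
    y = data[:, child]
    if parents:
        X = np.column_stack([data[:, p] for p in parents] + [np.ones(len(y))])
    else:
        X = np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid.var(), 1e-12)
    nll_nats = 0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return bits_per_edge * len(parents) + nll_nats / np.log(2)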

Relevance: 20.00%

Abstract:

Reliable forecasting of future construction costs or prices helps ensure that the budget of a construction project can be well planned and that the limited resources of construction firms can be allocated more appropriately. Although many studies have focused on construction price modelling and forecasting, few have considered the impacts of global economic events and seasonality. In this study, an advanced multivariate modelling technique, the vector error correction (VEC) model with dummy variables, was employed, and the impacts of global economic events and seasonality were factored into the forecasting model for building construction prices in the Australian construction market. Research findings suggest that a long-run equilibrium relationship exists among price and the levels of supply and demand in the construction market. The reliability of the forecasting models was examined using the mean absolute percentage error (MAPE) and Theil's inequality coefficient (U). The results of the MAPE and U tests suggest that the conventional VEC model and the VEC model with dummy variables are both acceptable for forecasting building construction prices, while the VEC model that considers external impacts achieves higher prediction accuracy than the conventional VEC model.
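
For reference, the two accuracy measures used in the study can be computed as follows (standard textbook formulas; the function names and the U1 form of Theil's coefficient are our choices):

import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def theil_u(actual, forecast):
    """Theil's inequality coefficient (U1 form): 0 is a perfect forecast, 1 the worst."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((actual - forecast) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2)))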

Relevance: 20.00%

Abstract:

Identifying gene signatures associated with estrogen receptor-based breast cancer samples is a challenging problem with significant implications for breast cancer diagnosis and treatment. Various approaches for identifying gene signatures have been developed but fail to achieve satisfactory results because of several limitations. Subnetwork-based approaches have been shown to be a robust classification method that uses interaction datasets, such as protein-protein interaction datasets. It has been reported that these interaction datasets contain many irrelevant interactions with no associated biological meaning, and filtering out those interactions can improve classification results. In this paper we therefore propose a hub-based reliable gene expression algorithm (HRGE) that effectively extracts the significant, biologically relevant interactions and uses hub-gene topology to generate subnetwork-based gene signatures for ER+ and ER- breast cancer subtypes. The proposed approach shows superior classification accuracy over the other existing classifiers on the validation dataset.
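
A hypothetical sketch of the interaction-filtering and hub-selection steps (networkx-based; the co-expression filter, thresholds and names are assumptions rather than the published HRGE procedure):

import numpy as np
import networkx as nx

def hub_subnetworks(ppi_edges, expression, top_k=20, min_corr=0.4):
    """Build candidate subnetwork signatures around highly connected hub genes.

    ppi_edges: iterable of (gene_a, gene_b) protein-protein interactions.
    expression: dict mapping gene -> 1-D expression array across samples.
    """
    g = nx.Graph()
    for a, b in ppi_edges:
        if a in expression and b in expression:
            r = abs(np.corrcoef(expression[a], expression[b])[0, 1])
            if r >= min_corr:          # drop interactions unsupported by co-expression
                g.add_edge(a, b, weight=r)
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:top_k]
    # Each hub plus its direct neighbours forms one candidate subnetwork signature.
    return [set(g[h]) | {h} for h, _ in hubs]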

Relevance: 20.00%

Abstract:

Background: The effects of the built environment on walking in seniors have not been studied in an Asian context. To examine these effects, valid and reliable measures are needed. The aim of this study was to develop and validate a questionnaire of perceived neighborhood characteristics related to walking appropriate for Chinese seniors (Neighborhood Environment Walkability Scale for Chinese Seniors, NEWS-CS). It was based on the Neighborhood Environment Walkability Scale - Abbreviated (NEWS-A), a validated measure of perceived built environment developed in the USA for adults. A secondary study aim was to establish the generalizability of the NEWS-A to an Asian high-density urban context and a different age group.

Methods: A multidisciplinary panel of experts adapted the original NEWS-A to reflect the built environment of Hong Kong and the needs of seniors. The translated instrument was pre-tested on a sample of 50 Chinese-speaking senior residents (65+ years). The final version of the NEWS-CS was interviewer-administered to 484 seniors residing in four selected Hong Kong districts varying in walkability and socio-economic status. Ninety-two participants completed the questionnaire on two separate occasions, 2-3 weeks apart. Test-retest reliability indices were estimated for each item and subscale of the NEWS-CS. Confirmatory factor analysis was used to develop the measurement model of the NEWS-CS and cross-validate that of the NEWS-A.

Results: The final version of the NEWS-CS consisted of 14 subscales and four single items (76 items). Test-retest reliability was moderate to good (ICC > 0.50 or % agreement > 60%), except for four items measuring distance to destinations. The originally-proposed measurement models of the NEWS-A and NEWS-CS required 2-3 theoretically-justifiable modifications to fit the data well.

Conclusions: The NEWS-CS possesses sufficient levels of reliability and factorial validity to be used for measuring perceived neighborhood environment in Chinese seniors. Further work is needed to assess its construct validity and generalizability to other Asian locations. In general, the measurement model of the original NEWS-A was generalizable to this study context, supporting the feasibility of cross-country and age-group comparisons of the effect of the neighborhood environment on walking using the NEWS-A as a tool to measure the perceived built environment.
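
For readers who want to reproduce test-retest indices like those reported above, a minimal single-measure ICC(2,1) helper is sketched below (our own code, not the study's analysis; a complete two-way subjects-by-occasions layout with no missing data is assumed):

import numpy as np

def icc_2_1(scores):
    """Two-way random effects, absolute agreement, single-measure ICC(2,1).

    scores: array of shape (n_subjects, k_occasions), e.g. test vs retest ratings.
    """
    scores = np.asarray(scores, float)
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)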