943 results for Chunk-based information diffusion


Relevance: 100.00%

Abstract:

In Switzerland there is a shortage of population-based information on heart failure (HF) incidence and case fatalities (CF). The aim of this study was to estimate HF event rates and both in- and out-of-hospital CF rates.

Relevance: 100.00%

Abstract:

OBJECTIVE: In Switzerland there is a shortage of population-based information on stroke incidence and case fatalities (CF). The aim of this study was to estimate stroke event rates and both in- and out-of-hospital CF rates. METHODS: Data on stroke diagnoses, coded as I60-I64 (ICD-10), were taken from the Federal Hospital Discharge Statistics database (HOST) and the Cause of Death database (CoD) for the year 2004. The total number of stroke events and the age- and gender-specific and age-standardised event rates were estimated; overall, in-hospital and out-of-hospital CF rates were determined. RESULTS: Of the 13 996 hospital discharges for stroke (HOST), fewer were in women (n = 6736) than in men (n = 7260). A total of 3568 deaths (2137 women and 1431 men) due to stroke were recorded in the CoD database. The number of estimated stroke events was 15 733, and was higher in women (n = 7933) than in men (n = 7800). Men presented significantly higher age-specific stroke event rates and a higher age-standardised event rate (178.7/100 000 versus 119.7/100 000). Overall CF rates were significantly higher for women (26.9%) than for men (18.4%); the same was true of out-of-hospital but not of in-hospital CF rates. CONCLUSION: The estimated stroke events indicate that the stroke discharge rate underestimates the stroke event rate. Out-of-hospital deaths from stroke accounted for the largest proportion of total stroke deaths. Sex differences in both the number of total stroke events and deaths could be explained by the higher proportion of women than men aged 55+ in the Swiss population.
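As a rough illustration of how age-standardised event rates such as the 178.7 versus 119.7 per 100 000 reported above are produced by direct standardisation, the short sketch below weights hypothetical age-specific rates by a standard population's age distribution. The age groups, rates and weights are invented for illustration and are not the study's data.

```python
# Direct age standardisation: weight age-specific rates by a standard
# population's age distribution.  All numbers below are invented.
age_specific_rates = {"45-54": 60, "55-64": 180, "65-74": 450, "75+": 1200}   # events per 100,000
standard_weights   = {"45-54": 0.40, "55-64": 0.30, "65-74": 0.20, "75+": 0.10}  # must sum to 1

age_standardised_rate = sum(age_specific_rates[g] * standard_weights[g]
                            for g in age_specific_rates)
print(f"{age_standardised_rate:.1f} per 100,000")   # 60*0.4 + 180*0.3 + 450*0.2 + 1200*0.1 = 288.0
```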

Relevance: 100.00%

Abstract:

Tobacco use is a major health hazard, and the onset of tobacco use occurs almost entirely in the teenage years. For this reason, schools are an ideal site for tobacco prevention programs. Although studies have shown that effective school-based tobacco prevention programs exist, all too frequently these programs are not used. In order for effective programs to achieve their potential impact, strategies for speeding the diffusion of these programs to school districts and seeing that, once adopted, programs are implemented as they are intended, must be developed and tested. This study (SC2) set out to replicate the findings of an earlier quasi-experimental study (The Smart Choices Diffusion Study, or SC1) in which strategies based on diffusion theory and social learning theory were found to be effective in encouraging adoption and implementation of an effective tobacco prevention program in schools. To increase awareness and encourage adoption, intervention strategies in both studies utilized opinion leaders, messages highlighting positive aspects of the program, and modeling of benefits and effective use through videotape and newsletters. To encourage accurate implementation of the curriculum, teacher training for the two studies utilized videotaped modeling and practice of activities by teachers. SC2 subjects were 38 school districts that make up one of Texas' 20 education service regions. These districts had served as the comparison group in SC1, and findings for the SC1 comparison and intervention groups were utilized as historic controls. SC2 achieved a 76.3% adoption rate and found that an average of 84% of the curriculum was taught with an 82% fidelity to methods utilized by the curriculum. These rates and rates for implementation of dissemination strategies were equal to or greater than corresponding rates for SC1. The proportion of teachers implementing the curriculum in SC2 was found to be equal to SC1's video-trained districts but lower than the SC1 workshop-trained group. SC2's findings corroborate and support the findings from the earlier study, and increase our confidence in its findings. Taken together, the findings from SC2 and SC1 point to the effectiveness of their theory-based intervention strategies in encouraging adoption and accurate implementation of the tobacco prevention curriculum.

Relevance: 100.00%

Abstract:

Computational network analysis provides new methods to analyze the brain's structural organization based on diffusion imaging tractography data. Networks are characterized by global and local metrics that have recently given promising insights into diagnosis and the further understanding of psychiatric and neurologic disorders. Most of these metrics are based on the idea that information in a network flows along the shortest paths. In contrast to this notion, communicability is a broader measure of connectivity which assumes that information could flow along all possible paths between two nodes. In our work, the features of network metrics related to communicability were explored for the first time in the healthy structural brain network. In addition, the sensitivity of such metrics was analyzed using simulated lesions to specific nodes and network connections. Results showed advantages of communicability over conventional metrics in detecting densely connected nodes as well as subsets of nodes vulnerable to lesions. In addition, communicability centrality was shown to be widely affected by the lesions, and the changes were negatively correlated with the distance from the lesion site. In summary, our analysis suggests that communicability metrics may provide insight into the integrative properties of the structural brain network and that these metrics may be useful for the analysis of brain networks in the presence of lesions. Nevertheless, the interpretation of communicability is not straightforward; hence these metrics should be used as a supplement to the more standard connectivity network metrics.
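For readers unfamiliar with the measure, communicability between nodes p and q is commonly defined as the (p, q) entry of the matrix exponential of the adjacency matrix, so walks of every length contribute with weight 1/k!. The short Python sketch below illustrates that definition together with a simulated node lesion on a small toy graph; the adjacency matrix is invented for illustration and is not connectome data from the study.

```python
# Minimal sketch: communicability G = exp(A), so G[p, q] = sum_k (A^k)[p, q] / k!
# (shorter walks count more).  The 5-node toy network below is illustrative only.
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

G = expm(A)                       # communicability matrix
centrality = G.diagonal()         # communicability (subgraph) centrality of each node

# Simulate a "lesion" by removing all connections of node 2 and recompute.
A_lesioned = A.copy()
A_lesioned[2, :] = 0
A_lesioned[:, 2] = 0
G_lesioned = expm(A_lesioned)

print("centrality before lesion:", np.round(centrality, 3))
print("centrality after lesion :", np.round(G_lesioned.diagonal(), 3))
```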

Relevance: 100.00%

Abstract:

Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:

• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.

• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.

Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:

• A representation of sensor data time series that captures gradient information that is useful to characterize types of sensor data.

• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
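The last two contributions lend themselves to a short illustration. The sketch below is only a schematic stand-in, not the thesis' method: it summarises each sensor time series by a few gradient-based features (statistics of the first differences) and trains an off-the-shelf classifier on two synthetic types of series. The feature set, the RandomForest model and the synthetic data are all assumptions.

```python
# Schematic only: gradient features + generic classifier for sensor time series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gradient_features(series: np.ndarray) -> np.ndarray:
    """Summarise a 1-D series by statistics of its first differences."""
    grad = np.diff(series)
    return np.array([grad.mean(), grad.std(), np.abs(grad).max(),
                     (grad > 0).mean()])           # fraction of rising steps

rng = np.random.default_rng(0)
# Two synthetic "types" of sensor data: a slowly drifting signal (e.g. temperature)
# and a noisy oscillating one (e.g. wind speed).
smooth = [np.cumsum(rng.normal(0, 0.05, 200)) for _ in range(50)]
noisy  = [np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.5, 200) for _ in range(50)]

X = np.array([gradient_features(s) for s in smooth + noisy])
y = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```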

Relevance: 100.00%

Abstract:

Computer display height and desk design to allow forearm support are two critical design features of workstations for information technology tasks. However, there is currently no 3D description of head and neck posture with different computer display heights, and no direct comparison to paper-based information technology tasks. There is also inconsistent evidence on the effect of forearm support on posture, and no evidence on whether these features interact. This study compared the 3D head, neck and upper limb postures of 18 male and 18 female young adults whilst working under different display and desk design conditions. There was no substantial interaction between display height and desk design. Lower display heights increased head and neck flexion, with more spinal asymmetry when working with paper. The curved desk, designed to provide forearm support, increased scapula elevation/protraction and shoulder flexion/abduction.

Relevance: 100.00%

Abstract:

Increasingly, users are seen as the weak link in the chain when it comes to the security of corporate information. Should the users of computer systems act in any inappropriate or insecure manner, then they may put their employers in danger of financial losses, information degradation or litigation, and themselves in danger of dismissal or prosecution. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of inappropriate behaviours, and in so doing, protecting corporate information, is through the formulation and application of a formal ‘acceptable use policy’ (AUP). Whilst the AUP has attracted some academic interest, the literature has tended to be prescriptive and overly focussed on the role of the Internet, and there is relatively little empirical material that explicitly addresses the purpose, positioning or content of real acceptable use policies. The broad aim of the study, reported in this paper, is to fill this gap in the literature by critically examining the structure and composition of a sample of authentic policies – taken from the higher education sector – rather than simply making general prescriptions about what they ought to contain. There are two important conclusions to be drawn from this study: (1) the primary role of the AUP appears to be as a mechanism for dealing with unacceptable behaviour, rather than proactively promoting desirable and effective security behaviours, and (2) the wide variation found in the coverage and positioning of the reviewed policies is unlikely to be fostering a coherent approach to security management across the higher education sector.

Relevance: 100.00%

Abstract:

Ensuring the security of corporate information, which is increasingly stored, processed and disseminated using information and communications technologies (ICTs), has become an extremely complex and challenging activity. This is a particularly important concern for knowledge-intensive organisations, such as universities, as the effective conduct of their core teaching and research activities is becoming ever more reliant on the availability, integrity and accuracy of computer-based information resources. One increasingly important mechanism for reducing the occurrence of security breaches, and in so doing, protecting corporate information, is through the formulation and application of a formal information security policy (InSPy). Whilst a great deal has now been written about the importance and role of the information security policy, and approaches to its formulation and dissemination, there is relatively little empirical material that explicitly addresses the structure or content of security policies. The broad aim of the study, reported in this paper, is to fill this gap in the literature by critically examining the structure and content of authentic information security policies, rather than simply making general prescriptions about what they ought to contain. Having established the structure and key features of the reviewed policies, the paper critically explores the underlying conceptualisation of information security embedded in the policies. There are two important conclusions to be drawn from this study: (1) the wide diversity of disparate policies and standards in use is unlikely to foster a coherent approach to security management; and (2) the range of specific issues explicitly covered in university policies is surprisingly low, and reflects a highly techno-centric view of information security management.

Relevance: 100.00%

Abstract:

In this paper we consider a computer information system and a way to secure the data in it using digital watermarking. A technique for spread spectrum watermarking is presented and its realization in MATLAB 6.5 is shown.
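The paper's implementation is in MATLAB and is not reproduced here. As a rough, language-agnostic illustration of the general spread spectrum idea only, the Python sketch below adds a key-generated ±1 sequence, scaled by a small gain, to the low-frequency DCT coefficients of a signal and detects it by correlation; the function names, parameters and detection threshold are all assumptions for illustration.

```python
# Schematic spread-spectrum watermark: embed a keyed +/-1 sequence in DCT
# coefficients, detect by correlating the DCT-domain difference with the key.
import numpy as np
from scipy.fft import dct, idct

def embed(signal, key, alpha=0.05, n_coeffs=128):
    """Add a key-generated +/-1 spreading sequence to low-frequency DCT coefficients."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=n_coeffs)
    c = dct(signal, norm="ortho")
    c[1:n_coeffs + 1] += alpha * w            # leave the DC coefficient untouched
    return idct(c, norm="ortho")

def detect(marked, original, key, alpha=0.05, n_coeffs=128):
    """Non-blind detector: correlate the DCT-domain difference with the key's sequence."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=n_coeffs)
    diff = dct(marked, norm="ortho") - dct(original, norm="ortho")
    return (diff[1:n_coeffs + 1] @ w) / n_coeffs > alpha / 2

host = np.sin(np.linspace(0, 50, 4096))       # toy host signal standing in for an image/audio block
marked = embed(host, key=42)
print("correct key detected:", detect(marked, host, key=42))   # True
print("wrong key detected:  ", detect(marked, host, key=7))    # False with high probability
```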

Relevance: 100.00%

Abstract:

With the development of social media tools such as Facebook and Twitter, mainstream media organizations including newspapers and TV media have played an active role in engaging with their audience and strengthening their influence on these recently emerged platforms. In this paper, we analyze the behavior of mainstream media on Twitter and study how they exert their influence to shape public opinion during the UK's 2010 General Election. We first propose an empirical measure to quantify mainstream media bias based on sentiment analysis and show that it correlates better with the actual political bias in the UK media than pure quantitative measures based on media coverage of the various political parties. We then compare the information diffusion patterns from different categories of sources. We found that while mainstream media is good at seeding prominent information cascades, its role in shaping public opinion is being challenged by journalists, since tweets from individual journalists are more likely to be retweeted, spread faster and have a longer lifespan than tweets from mainstream media accounts. Moreover, the political bias of the journalists is a good indicator of the actual election results. Copyright 2013 ACM.
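The paper's bias measure and its sentiment classifier are not reproduced here; the sketch below is only a schematic illustration of the general idea: score each tweet with a toy sentiment lexicon, average the scores of tweets mentioning each party, and take the difference as a signed bias score. The lexicon, party names and sample tweets are invented.

```python
# Schematic sentiment-based bias score for one media account's tweets.
from collections import defaultdict

POSITIVE = {"win", "strong", "good", "success", "popular"}
NEGATIVE = {"fail", "weak", "bad", "scandal", "unpopular"}

def tweet_sentiment(text: str) -> int:
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def bias_score(tweets, party_a="labour", party_b="conservative"):
    """Positive score = more favourable coverage of party_a than party_b."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text in tweets:
        s = tweet_sentiment(text)
        for party in (party_a, party_b):
            if party in text.lower():
                totals[party] += s
                counts[party] += 1
    mean = lambda p: totals[p] / counts[p] if counts[p] else 0.0
    return mean(party_a) - mean(party_b)

sample = ["Labour enjoys a strong poll lead",
          "Conservative campaign hit by scandal",
          "Labour policy called weak by critics"]
print(bias_score(sample))
```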

Relevance: 100.00%

Abstract:

Trade credit is an important source of finance for SMEs and this study investigates the use of the financial statements and other information in making trade credit decisions in smaller entities in Finland, the UK, USA and South Africa. The study adds to the literature by examining the information needs of unincorporated entities as a basis for making comparisons with small, unlisted companies. In-depth, semi-structured interviews in each country were used to collect data from the owner-managers of SMEs and from credit rating agencies and credit insurers. The findings provide insights into similarities and differences between countries and between developed and developing economies. The evidence suggests that there are three main influences on the trade credit decision: formal and report-based information, soft information relating to social capital and contingency factors. The latter dictate the extent to which hard/formal information versus soft/informal information is used.

Relevance: 100.00%

Abstract:

Severe class imbalance, i.e. the presence of underrepresented data, has a great effect on the performance of learning algorithms and remains a challenge in data mining and machine learning. Much current research focuses on experimental comparison of the existing re-sampling approaches; we believe new ways of constructing better algorithms are required to further balance and analyse the data set. This paper presents a Fuzzy-based Information Decomposition oversampling (FIDoS) algorithm for handling imbalanced data. Generally speaking, this is a new way of addressing imbalanced learning problems from a missing-data perspective. First, we assume that there are missing instances in the minority class that result in the imbalanced dataset. Then the proposed algorithm, which takes advantage of a fuzzy membership function, is used to transfer information to the missing minority class instances. Finally, the experimental results demonstrate that the proposed algorithm is more practical and applicable compared to existing sampling techniques.
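The abstract does not spell out the algorithm, so the sketch below is only a schematic illustration of the general idea, not the authors' exact FIDoS procedure: the class-count gap is treated as the number of "missing" minority instances, and each one is filled in as a fuzzy-membership-weighted average of the existing minority points. The Gaussian membership function, parameter names and synthetic data are purely assumptions.

```python
# Schematic fuzzy-membership oversampling of the minority class.
import numpy as np

def fuzzy_oversample(X_min, X_maj, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_missing = len(X_maj) - len(X_min)           # instances assumed "missing"
    synthetic = []
    for _ in range(n_missing):
        centre = X_min[rng.integers(len(X_min))]
        d = np.linalg.norm(X_min - centre, axis=1)
        mu = np.exp(-(d / sigma) ** 2)            # fuzzy membership of each minority point
        weights = mu / mu.sum()
        synthetic.append(weights @ X_min)         # information transferred by weighted average
    return np.vstack([X_min, np.array(synthetic)])

X_majority = np.random.default_rng(1).normal(0, 1, size=(100, 2))
X_minority = np.random.default_rng(2).normal(3, 1, size=(20, 2))
X_balanced = fuzzy_oversample(X_minority, X_majority)
print(X_balanced.shape)    # (100, 2): minority class brought up to the majority count
```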

Relevance: 100.00%

Abstract:

In this paper, the notion of the cumulative time-varying graph (C-TVG) is proposed to model the high dynamics of, and relationships between, ordered static graph sequences for space-based information networks (SBINs). In order to improve the performance of management and control of the SBIN, the complexity and social properties of the SBIN's highly dynamic topology over a period of time are investigated based on the proposed C-TVG. Moreover, a cumulative topology generation algorithm is designed to establish the topology evolution of the SBIN, which supports the C-TVG based complexity analysis and reduces network congestions and collisions resulting from traditional link establishment mechanisms between satellites. Simulations test the social properties of the SBIN cumulative topology generated through the proposed C-TVG algorithm. Results indicate that the C-TVG based analysis reveals more complexity properties of the SBIN than topology analysis without time cumulation. In addition, an attack on the SBIN is simulated, and the results indicate the validity and effectiveness of the proposed C-TVG and the C-TVG based complexity analysis for the SBIN.
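Although the paper's C-TVG construction is not spelled out in the abstract, the cumulative idea itself can be sketched simply: union the edges of an ordered sequence of static snapshots up to each time step and track how global properties evolve. The Python sketch below does this on random toy graphs standing in for satellite contact snapshots; it is not the authors' topology generation algorithm.

```python
# Schematic cumulative topology over an ordered sequence of static snapshots.
import networkx as nx

snapshots = [nx.gnp_random_graph(20, 0.08, seed=t) for t in range(10)]  # toy contact graphs

cumulative = nx.Graph()
cumulative.add_nodes_from(range(20))
for t, snap in enumerate(snapshots):
    cumulative.add_edges_from(snap.edges())        # edges accumulate over time
    diameter = nx.diameter(cumulative) if nx.is_connected(cumulative) else float("inf")
    print(f"t={t}: edges={cumulative.number_of_edges()}, "
          f"clustering={nx.average_clustering(cumulative):.3f}, diameter={diameter}")
```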

Relevance: 100.00%

Abstract:

With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should then fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web service description language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of query terms that otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
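As a simplified stand-in for the phase-I match-making described above (not the thesis' support-based kernel built by binning and merging), the sketch below projects service descriptions and a query into a latent semantic space using TF-IDF and truncated SVD, then ranks services by cosine similarity to the query. The service descriptions, query and parameters are invented for illustration.

```python
# Schematic latent-semantic match-making for Web service descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

services = [
    "returns current weather forecast for a given city",
    "converts an amount between two currencies using daily exchange rates",
    "books a hotel room given city, dates and number of guests",
]
query = ["weather conditions for a city"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(services + query)
svd = TruncatedSVD(n_components=2, random_state=0)   # latent semantic space
Z = svd.fit_transform(X)

scores = cosine_similarity(Z[-1:], Z[:-1])[0]        # query vs. each service
for score, desc in sorted(zip(scores, services), reverse=True):
    print(f"{score:.3f}  {desc}")
```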