805 results for Collaborative Filtering
Abstract:
BACKGROUND: Published work assessing psychosocial stress (job strain) as a risk factor for coronary heart disease is inconsistent and subject to publication bias and reverse causation bias. We analysed the relation between job strain and coronary heart disease with a meta-analysis of published and unpublished studies. METHODS: We used individual records from 13 European cohort studies (1985-2006) of men and women without coronary heart disease who were employed at the time of baseline assessment. We measured job strain with questions from validated job-content and demand-control questionnaires. We extracted data in two stages such that acquisition and harmonisation of the job strain measure and covariables occurred before linkage to records for coronary heart disease. We defined incident coronary heart disease as the first non-fatal myocardial infarction or coronary death. FINDINGS: 30,214 (15%) of 197,473 participants reported job strain. In 1.49 million person-years at risk (mean follow-up 7.5 years [SD 1.7]), we recorded 2358 events of incident coronary heart disease. After adjustment for sex and age, the hazard ratio for job strain versus no job strain was 1.23 (95% CI 1.10-1.37). This effect estimate was higher in published (1.43, 1.15-1.77) than in unpublished (1.16, 1.02-1.32) studies. Hazard ratios were likewise raised in analyses addressing reverse causality by exclusion of coronary heart disease events that occurred in the first 3 years (1.31, 1.15-1.48) and 5 years (1.30, 1.13-1.50) of follow-up. We noted an association between job strain and coronary heart disease across sexes, age groups, socioeconomic strata, and regions, and after adjustment for socioeconomic status, lifestyle, and conventional risk factors. The population attributable risk for job strain was 3.4%.
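The reported attributable risk is consistent with Levin's formula, PAR = Pe*(HR - 1) / (1 + Pe*(HR - 1)), where Pe is the exposure prevalence. The check below assumes this standard formula was used (the abstract does not say so explicitly) and plugs in the prevalence and adjusted hazard ratio reported above:

```python
# Worked check of the reported population attributable risk (PAR),
# assuming Levin's formula: PAR = Pe*(HR - 1) / (1 + Pe*(HR - 1)).
# Inputs come from the abstract: 30,214 of 197,473 participants
# exposed to job strain, age- and sex-adjusted hazard ratio 1.23.

def population_attributable_risk(prevalence: float, hazard_ratio: float) -> float:
    """Levin's formula for the population attributable risk fraction."""
    excess = prevalence * (hazard_ratio - 1.0)
    return excess / (1.0 + excess)

par = population_attributable_risk(30214 / 197473, 1.23)
print(f"PAR = {par:.1%}")  # matches the 3.4% reported in the abstract
```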
INTERPRETATION: Our findings suggest that prevention of workplace stress might decrease disease incidence; however, this strategy would have a much smaller effect than would tackling of standard risk factors, such as smoking. FUNDING: Finnish Work Environment Fund, the Academy of Finland, the Swedish Research Council for Working Life and Social Research, the German Social Accident Insurance, the Danish National Research Centre for the Working Environment, the BUPA Foundation, the Ministry of Social Affairs and Employment, the Medical Research Council, the Wellcome Trust, and the US National Institutes of Health.
Abstract:
N-gram analysis is an approach that investigates the structure of a program using bytes, characters, or text strings. A key issue with N-gram analysis is feature selection amid the explosion of features that occurs as N is increased. The experiments in this paper represent programs as operational code (opcode) density histograms obtained through dynamic analysis. A support vector machine is used to create a reference model, which is used to evaluate two methods of feature reduction: 'area of intersect' and 'subspace analysis using eigenvectors.' The findings show that the relationships between features are complex and that simple statistical filtering approaches are not viable. However, eigenvector subspace analysis produces a suitable filter.
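One common way to realise 'subspace analysis using eigenvectors' is to project the opcode-density histograms onto the leading eigenvectors of their covariance matrix (i.e. principal component analysis). The sketch below illustrates that general technique on synthetic data; it is an assumption about the approach, not the paper's implementation:

```python
# Minimal sketch of eigenvector subspace analysis for reducing
# opcode-density features: project the histograms onto the top
# principal components of the training data. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 200))             # 40 programs x 200 opcode densities
X = X / X.sum(axis=1, keepdims=True)  # normalise each histogram

Xc = X - X.mean(axis=0)               # centre the features
cov = np.cov(Xc, rowvar=False)        # 200 x 200 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]     # sort eigenvectors by variance

k = 10                                # keep the top-k eigenvector subspace
W = eigvecs[:, order[:k]]
X_reduced = Xc @ W                    # 40 x 10 reduced feature matrix
print(X_reduced.shape)                # (40, 10)
```

The reduced matrix would then feed the SVM in place of the full 200-dimensional histograms.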
Abstract:
A novel bit-level systolic array is presented that can be used as a building block in the construction of recursive digital filters. The circuit accepts bit-parallel input data, is pipelined at the bit level, and exhibits a very high throughput rate. Its most important feature is that it allows recursive operations to be implemented directly without incurring the large m-cycle latency (where m is approximately the word length) normally associated with such systems. The use of this circuit in the construction of both first- and second-order IIR (infinite-impulse-response) filters is described.
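The latency problem arises because each output of a recursive filter depends on the previous output. The scalar Python model below (illustrative only, not the systolic circuit) makes that feedback dependence explicit for a first-order section:

```python
# Scalar model of a first-order IIR section y[n] = b*x[n] + a*y[n-1].
# The recursive dependence on the previous output is what prevents
# naive pipelining of the multiply-accumulate in hardware.

def iir_first_order(x, a, b):
    y, prev = [], 0.0
    for sample in x:
        out = b * sample + a * prev  # feedback: needs the last output
        y.append(out)
        prev = out
    return y

print(iir_first_order([1.0, 0.0, 0.0, 0.0], a=0.5, b=1.0))
# impulse response: [1.0, 0.5, 0.25, 0.125]
```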
Abstract:
Recently, a number of most-significant-digit (msd) first bit-parallel multipliers for recursive filtering have been reported. However, the design approaches used have generally been heuristic, and consequently optimality has not always been assured. In this paper, msd-first multiply-accumulate algorithms are described, and important relationships governing the dependencies between latency, number representations, etc. are derived. A more systematic approach to designing recursive filters is illustrated by applying the algorithms and associated relationships to the design of cascadable modules for high-sample-rate IIR filtering and wave digital filtering.
Abstract:
The application of fine-grain pipelining techniques in the design of high-performance Wave Digital Filters (WDFs) is described. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most-significant-bit (msb) first arithmetic. A novel VLSI architecture for implementing two-port adaptor circuits is described which embodies these ideas. The circuit in question is highly regular, uses msb-first arithmetic and is implemented using simple carry-save adders. © 1992 Kluwer Academic Publishers.
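The carry-save adders mentioned above compress three input bits into a sum bit and a carry bit with no carry propagation across the word, which is what makes fine-grain pipelining feasible. A minimal bit-level sketch of the idea (an illustrative model, not the VLSI adaptor circuit):

```python
# Carry-save addition: each full-adder cell works independently,
# so three words are reduced to a sum word and a carry word with
# no ripple. One final carry-propagate add recovers the result.

def carry_save(a, b, c):
    """Full-adder cell: returns (sum, carry) for bits a, b, c."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

# Adding three 4-bit words 5 + 3 + 6 = 14 without rippling carries:
words = (0b0101, 0b0011, 0b0110)
sums, carries = 0, 0
for i in range(4):
    bits = [(w >> i) & 1 for w in words]
    s, cy = carry_save(*bits)
    sums |= s << i
    carries |= cy << (i + 1)   # carries shift left one position
print(sums + carries)          # final carry-propagate add gives 14
```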
Abstract:
Several novel systolic architectures for implementing densely pipelined bit-parallel IIR filter sections are presented. The fundamental problem of latency in the feedback loop is overcome by employing redundant arithmetic in combination with bit-level feedback, allowing a basic first-order section to achieve a wordlength-independent latency of only two clock cycles. This is extended to produce a building block from which higher order sections can be constructed. The architecture is then refined by combining the use of both conventional and redundant arithmetic, resulting in two new structures offering substantial hardware savings over the original design. In contrast to alternative techniques, bit-level pipelinability is achieved with no net cost in hardware. © 1989 Kluwer Academic Publishers.
Abstract:
The application of fine-grain pipelining techniques in the design of high-performance wave digital filters (WDFs) is described. The problems of latency in feedback loops can be significantly reduced if computations are organized most significant, as opposed to least significant, bit first and if the results are fed back as soon as they are formed. The result is that chips can be designed which offer significantly higher sampling rates than otherwise can be obtained using conventional methods. How these concepts can be extended to the more challenging problem of WDFs is discussed. It is shown that significant increases in the sampling rate of bit-parallel circuits can be achieved using most significant bit first arithmetic.
Abstract:
A novel bit-level systolic array architecture for implementing IIR (infinite-impulse response) filter sections is presented. A first-order section achieves a latency of only two clock cycles by using a radix-2 redundant number representation, performing the recursive computation most significant digit first, and feeding back each digit of the result as soon as it is available. The design is extended to produce a building block from which second- and higher-order sections can be connected.
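The radix-2 redundant (signed-digit) representation behind the two-cycle latency draws digits from {-1, 0, 1}, so a value has several encodings and the most significant digits can be emitted before lower-order carries resolve. A small illustrative sketch of the redundancy (a hypothetical helper, not part of the paper's circuit, which operates at the bit level):

```python
# Signed-digit fractions: digits di in {-1, 0, 1} with weights 2^-i.
# Because the representation is redundant, the same value has many
# encodings, which lets msd-first arithmetic commit early digits.

def sd_value(digits):
    """Value of a signed-digit fraction d1 d2 ... with di in {-1, 0, 1}."""
    return sum(d * 2.0 ** -(i + 1) for i, d in enumerate(digits))

# Two different digit strings encode the same number 0.25:
print(sd_value([0, 1]))    # 0*(1/2) + 1*(1/4)  = 0.25
print(sd_value([1, -1]))   # 1*(1/2) - 1*(1/4)  = 0.25
```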
Abstract:
Optimized circuits for implementing high-performance bit-parallel IIR filters are presented. Circuits constructed mainly from simple carry-save adders and based on most-significant-bit (MSB) first arithmetic are described. Two methods are presented that yield systems which are 100% efficient, in that they can sample data every cycle. In the first approach, the basic circuit is modified so that the level of pipelining used is compatible with the small, but fixed, latency associated with the computation in question. This is achieved through insertion of pipeline delays (half latches) on every second row of cells. This produces an area-efficient solution in which the throughput rate is determined by a critical path of 76 gate delays. The second approach combines the MSB-first arithmetic methods with scattered look-ahead methods. Important design issues are addressed, including wordlength truncation, overflow detection, and saturation.
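The look-ahead idea can be seen at word level by unrolling the recursion y[n] = a*y[n-1] + x[n] once, so the feedback spans two samples instead of one (scattered look-ahead generalises this step). The scalar Python sketch below illustrates only that algebraic transformation, not the circuits described:

```python
# Basic look-ahead: substitute the recursion into itself once, giving
# y[n] = a^2*y[n-2] + a*x[n-1] + x[n]. The feedback loop now spans
# two samples, doubling the latency budget at the cost of extra
# feed-forward computation.

def direct(x, a):
    y, prev = [], 0.0
    for s in x:
        prev = a * prev + s
        y.append(prev)
    return y

def lookahead2(x, a):
    y = []
    for n, s in enumerate(x):
        y_nm2 = y[n - 2] if n >= 2 else 0.0
        x_nm1 = x[n - 1] if n >= 1 else 0.0
        y.append(a * a * y_nm2 + a * x_nm1 + s)
    return y

x = [1.0, 2.0, 3.0, 4.0]
print(direct(x, 0.5))       # [1.0, 2.5, 4.25, 6.125]
print(lookahead2(x, 0.5))   # identical outputs
```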
Abstract:
In this article, we focus on the analysis of competitive gene set methods for detecting the statistical significance of pathways from gene expression data. Our main result is to demonstrate that some of the most frequently used gene set methods, GSEA, GSEArot and GAGE, are so severely influenced by the filtering of the data that the analysis is no longer reconcilable with the principles of statistical inference, rendering the obtained results, in the worst case, meaningless. A possible consequence is that these methods can increase their power through the addition of unrelated data and noise. Our results are obtained within a bootstrapping framework that allows a rigorous assessment of the robustness of results and enables power estimates. They indicate that when using competitive gene set methods, it is imperative to apply a stringent gene filtering criterion. However, even when genes are filtered appropriately, for gene expression data from chips that do not provide genome-scale coverage of the expression values of all mRNAs, this is not enough for GSEA, GSEArot and GAGE to ensure the statistical soundness of the applied procedure. For this reason, we strongly advise against using GSEA, GSEArot and GAGE for such data sets in biomedical and clinical studies.
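The bootstrapping framework the authors describe can be pictured as resampling the samples with replacement and recording a gene-set statistic on each resample to gauge its stability. The sketch below uses a generic mean-difference score on synthetic data as a stand-in; it is not GSEA, GSEArot or GAGE, and the gene set and score are hypothetical:

```python
# Generic bootstrap over samples for a gene-set statistic: resample
# the columns (samples) with replacement and recompute the score.
# The spread of the bootstrap scores indicates robustness.
import numpy as np

rng = np.random.default_rng(1)
expr = rng.normal(size=(500, 20))       # 500 genes x 20 samples (synthetic)
labels = np.array([0] * 10 + [1] * 10)  # two phenotype groups
gene_set = np.arange(25)                # hypothetical 25-gene pathway

def set_score(data, labels, genes):
    """Mean absolute between-group difference over the gene set."""
    g0 = data[:, labels == 0]
    g1 = data[:, labels == 1]
    diff = g1.mean(axis=1) - g0.mean(axis=1)
    return float(np.abs(diff[genes]).mean())

boot = []
for _ in range(200):
    idx = rng.integers(0, 20, size=20)  # resample samples with replacement
    boot.append(set_score(expr[:, idx], labels[idx], gene_set))

print(round(float(np.mean(boot)), 3), round(float(np.std(boot)), 3))
```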
Abstract:
Health services research has emerged as a tool for decision makers to make services more effective and efficient. While its value as a basis for decision making is well established, the incorporation of such evidence into decision making remains inconsistent. To this end, strengthening collaborative relationships between researchers and healthcare decision makers has been identified as a significant strategy for putting research evidence into practice.
Abstract:
The health care field is a new arena for collaborative research carried out by practitioner-researcher teams. Although the current literature discusses factors supportive of such teams, most evidence is anecdotal or descriptive of pilot projects. In this article, the authors use survey and interview data to document health care practitioners' views on collaborative research with an experienced researcher/mentor. Topics covered include a description of the research project and process, positive and negative aspects of doing research, expectations, recommendations to colleagues starting research, and desirable characteristics in practitioners and researchers on collaborative research teams. Of all attributes mentioned, personal traits and skills were among the most frequently mentioned for both practitioners and researchers, followed by research knowledge and attitudes for practitioners, and teaching skills for researchers. The article also addresses factors important to the success of collaborative research: how to develop a project, characteristics of collaborative team members, team functioning, and institutional support. Copyright © 2000 Taylor & Francis.
Abstract:
Purpose: The purpose of this study was to describe the outcome of patients with filtering blebs who were fit with contact lenses. Methods: We retrospectively studied patients with filtering blebs secondary to glaucoma or cataract surgery who were fit with contact lenses. Eight eyes from seven patients were identified. Results: Five patients (six eyes) were fit with gas permeable contact lenses and two patients (two eyes) were fit with soft contact lenses. Successful fits were achieved in all patients. No complications were observed after a mean follow-up of 64.6±28.5 months. Conclusions: No significant complications were recorded in our series of patients with filtering blebs who were fit with contact lenses. We think that when indications for fitting contact lenses are justified, patients with filtering blebs are acceptable candidates for contact lens use. However, adequate selection of cases, careful contact lens fitting, patient education, and close follow-up are necessary.