978 results for Meta-design


Relevance:

30.00%

Publisher:

Abstract:

Beach and coastal dune systems are increasingly subjected to a broad range of anthropogenic pressures that on many shorelines require significant conservation and mitigation interventions. But these interventions require reliable data on the severity and frequency of adverse ecological impacts. Such evidence is often obtained by measuring the response of 'indicator species'. Ghost crabs are the largest invertebrates inhabiting tropical and subtropical sandy shores and are frequently used to assess human impacts on ocean beaches. Here we present the first global meta-analysis of these impacts, and analyse the design properties and metrics of studies using ghost crabs in their assessment. This was complemented by a gap analysis to identify thematic areas of anthropogenic pressures on sandy beach ecosystems that are under-represented in the published literature. Our meta-analysis demonstrates a broad geographic reach, encompassing studies on shores of the Pacific, Indian, and Atlantic Oceans, as well as the South China Sea. It also reveals what are, arguably, two major limitations: i) the near-universal use of proxies (i.e. burrow counts to estimate abundance) at the cost of directly measuring biological traits and biomarkers in the organism itself; and ii) descriptive or correlative study designs that rarely extend beyond a simple 'compare and contrast' approach, and hence fail to identify the mechanistic cause(s) of observed contrasts. Evidence for a historically narrow range of assessed pressures (i.e., chiefly urbanisation, vehicles, beach nourishment, and recreation) is juxtaposed with rich opportunities for the broader integration of ghost crabs as a model taxon in studies of disturbance and impact assessments on ocean beaches. Tangible advances will most likely occur where ghost crabs provide foci for experiments that test specific hypotheses associated with effects of chemical, light and acoustic pollution, as well as the consequences of climate change (e.g. species range shifts).

Relevance:

30.00%

Publisher:

Abstract:

All over the world, organizations are becoming more and more complex, and there is a need to capture this complexity. This is where the DEMO methodology (Design and Engineering Methodology for Organizations), created and developed by Jan L. G. Dietz, reaches its potential: capturing the structure of business processes in a coherent and consistent set of diagrams with their respective grammatical rules. The creation of the WAMM (Wiki Aided Meta Modeling) platform was the main focus of this thesis, and its principal precursor was the idea of creating a Meta-Editor that supports semantic data and uses MediaWiki. This prototype Meta-Editor uses MediaWiki as a repository for data, and draws on the ideas of the Universal Enterprise Adaptive Object Model and the concept of the Semantic Web to create a platform that suits our needs, through Semantic MediaWiki, which helps the computer interconnect information and people in a more comprehensive way, giving meaning to the content of the pages. The proposed Meta-Modeling platform allows the specification of the abstract syntax (i.e., the grammar) and concrete syntax (e.g., symbols and connectors) of any language, as well as their model types and diagram types. We use the DEMO language as a proof-of-concept and example. All such specifications are done in a coherent and formal way by the creation of semantic wiki pages and semantic properties connecting them.
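
As a rough, hypothetical illustration of what the platform is said to capture (abstract syntax, concrete syntax, model types and diagram types, each stored as semantic wiki pages linked by semantic properties), the following Python sketch encodes a tiny fragment of such a meta-model as plain data structures. The concept names and symbols are stand-ins chosen here, not definitions taken from WAMM or the DEMO specification.

```python
# Hypothetical sketch only: a tiny language meta-model expressed as Python data
# structures, mirroring what the abstract says is stored as semantic wiki pages
# (one page per concept) linked by semantic properties. None of these names are
# taken from the WAMM platform itself.
language = {
    "name": "DEMO",
    "abstract_syntax": {                      # the grammar: concepts and relations
        "concepts": ["Transaction", "ActorRole"],
        "relations": [
            {"name": "initiates", "from": "ActorRole", "to": "Transaction"},
            {"name": "executes",  "from": "ActorRole", "to": "Transaction"},
        ],
    },
    "concrete_syntax": {                      # symbols and connectors per concept (illustrative)
        "Transaction": {"symbol": "diamond-with-disk"},
        "ActorRole":   {"symbol": "rectangle"},
        "initiates":   {"connector": "solid-line"},
        "executes":    {"connector": "solid-line-with-box"},
    },
    "model_types":   ["Construction Model"],
    "diagram_types": ["Actor Transaction Diagram"],
}

# Each entry above would correspond to a semantic wiki page; each key/value pair
# to a semantic property connecting pages.
for relation in language["abstract_syntax"]["relations"]:
    print(f"{relation['from']} --{relation['name']}--> {relation['to']}")
```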

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the design of a network-on-chip reconfigurable pseudorandom number generation unit that can map and execute meta-heuristic algorithms in hardware. The unit can be configured to implement one of the following five linear generator algorithms: a multiplicative congruential, a mixed congruential, a standard multiple recursive, a mixed multiple recursive, and a multiply-with-carry. The generation unit can be used as a message passing-based pseudorandom-number server, able to produce pseudorandom numbers on demand and send them to the network-on-chip blocks that originate the service request. The generator architecture has been mapped to a field programmable gate array, showing that millions of numbers in 32-, 64-, 96-, or 128-bit formats can be produced in tens of milliseconds. (C) 2011 Elsevier B.V. All rights reserved.
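
The recurrences behind the five generator families can be sketched in software; the parameters below are common textbook choices for illustration and are not the configurations implemented in the hardware unit described by the paper.

```python
# Minimal software sketches of the five linear generator families named in the
# abstract. Parameters are illustrative only, not the hardware unit's settings.
from itertools import islice

def multiplicative_congruential(seed, a=48271, m=2**31 - 1):
    """X_{n+1} = (a * X_n) mod m  (no additive constant)."""
    x = seed
    while True:
        x = (a * x) % m
        yield x

def mixed_congruential(seed, a=1664525, c=1013904223, m=2**32):
    """X_{n+1} = (a * X_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def multiple_recursive(seeds, coeffs, m=2**31 - 1, c=0):
    """X_n = (a_1*X_{n-1} + ... + a_k*X_{n-k} + c) mod m.
    c = 0 gives the standard form; c != 0 gives the mixed form."""
    state = list(seeds)          # ordered oldest first, most recent last
    while True:
        x = (sum(a * s for a, s in zip(coeffs, reversed(state))) + c) % m
        state = state[1:] + [x]
        yield x

def multiply_with_carry(seed, carry, a=4294957665, b=2**32):
    """x_n = (a*x_{n-1} + c_{n-1}) mod b, carry c_n = floor((a*x_{n-1} + c_{n-1}) / b)."""
    x, c = seed, carry
    while True:
        t = a * x + c
        x, c = t % b, t // b
        yield x

# Example: draw a few values from each family.
print(list(islice(multiplicative_congruential(12345), 5)))
print(list(islice(mixed_congruential(12345), 5)))
print(list(islice(multiple_recursive([1, 2, 3], [2, 3, 5]), 5)))          # standard MRG
print(list(islice(multiple_recursive([1, 2, 3], [2, 3, 5], c=7), 5)))     # mixed MRG
print(list(islice(multiply_with_carry(12345, 1), 5)))
```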

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To determine the current status of the literature regarding the clinical efficacy and complication rates of cryoablation vs radiofrequency ablation in the treatment of small renal tumours. METHODS: A review of the literature was conducted. There was no language restriction. Studies were obtained from the following sources: MEDLINE, EMBASE and LILACS. Inclusion criteria were (i) case series design with more than one case reported, (ii) use of cryoablation or radiofrequency ablation, (iii) patients with renal cell carcinoma, and (iv) outcome reported as clinical efficacy. When available, we also quantified the complication rates from each included study. Proportional meta-analysis was performed on both outcomes with a random-effects model. The 95% confidence intervals were also calculated. RESULTS: Thirty-one case series (20 cryoablation, 11 radiofrequency ablation) met all inclusion criteria. The pooled proportion of clinical efficacy was 89% in cryoablation therapy from a total of 457 cases. There was statistically significant heterogeneity between these studies, reflecting the inconsistency of clinical and methodological aspects. The pooled proportion of clinical efficacy was 90% in radiofrequency ablation therapy from a total of 426 cases. There was no statistically significant heterogeneity between these studies. There was no statistically significant difference in complication rates between cryoablation and radiofrequency ablation. CONCLUSIONS: This review shows that both ablation therapies have similar efficacy and complication rates. There is an urgent need for clinical trials with long-term data to establish which intervention is most suitable for the treatment of small renal masses.
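
A proportional meta-analysis with a random-effects model, as used here, can be sketched as follows; the logit transformation and DerSimonian-Laird estimator are one standard way to do it, and the case counts in the example are placeholders rather than the reviewed series.

```python
# Sketch of a proportional meta-analysis with a random-effects (DerSimonian-Laird)
# model. The events/totals passed below are placeholders, not the included case series.
import math

def pool_proportions(events, totals):
    """Pool proportions on the logit scale; return (pooled proportion, 95% CI low, high)."""
    # Logit-transform each study proportion and its variance (0.5 continuity correction)
    y = [math.log((e + 0.5) / (n - e + 0.5)) for e, n in zip(events, totals)]
    v = [1 / (e + 0.5) + 1 / (n - e + 0.5) for e, n in zip(events, totals)]
    w = [1 / vi for vi in v]                                   # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights, pooled logit, and back-transform
    w_re = [1 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    inv_logit = lambda t: 1 / (1 + math.exp(-t))
    return inv_logit(y_re), inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se)

# Placeholder data: (clinically effective cases, total cases) per study
print(pool_proportions(events=[45, 88, 27], totals=[50, 95, 32]))
```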

Relevance:

30.00%

Publisher:

Abstract:

Major depressive disorder (MDD) trials - investigating either non-pharmacological or pharmacological interventions - have shown mixed results. Many reasons explain this heterogeneity, but one that stands out is the trial design, due to specific challenges in the field. We therefore aimed to review the methodology of non-invasive brain stimulation (NIBS) trials and provide a framework to improve clinical trial design. We performed a systematic review of randomized, controlled MDD trials whose intervention was repetitive transcranial magnetic stimulation (rTMS) or transcranial direct current stimulation (tDCS) in MEDLINE and other databases from April 2002 to April 2008. We created an unstructured checklist based on the CONSORT guidelines to extract items such as power analysis, sham method, blinding assessment, allocation concealment, operational criteria used for MDD, definition of refractory depression and primary study hypotheses. Thirty-one studies were included. We found that the main methodological issues can be divided into three groups: (1) issues related to phase II/small trials, (2) issues related to MDD trials, and (3) specific issues of NIBS studies. Taken together, they can threaten study validity and lead to inconclusive results. Feasible solutions include: estimating the sample size a priori; measuring the degree of refractoriness of the subjects; specifying the primary hypothesis and statistical tests; controlling predictor variables through stratified randomization or strict eligibility criteria; adjusting the study design to the target population; using adaptive designs; and exploring NIBS efficacy employing biological markers. In conclusion, our study summarizes the main methodological issues of NIBS trials and proposes a number of alternatives to manage them. Copyright (C) 2011 John Wiley & Sons, Ltd.
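
One of the proposed remedies, estimating the sample size a priori, can be illustrated with a standard power calculation for a two-arm trial; the effect size, alpha and power values below are arbitrary assumptions rather than figures from the reviewed NIBS trials, and statsmodels' solver is used only for convenience.

```python
# A priori sample-size estimation for a two-arm parallel trial
# (illustrative values only; not taken from the reviewed NIBS studies).
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.5,          # assumed standardized difference (Cohen's d)
    alpha=0.05,               # two-sided significance level
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Required participants per arm: {math.ceil(n_per_arm)}")
```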

Relevance:

30.00%

Publisher:

Abstract:

Background: Recent reviews have indicated that low level laser therapy (LLLT) is ineffective in lateral elbow tendinopathy (LET) without assessing the validity of treatment procedures and doses or the influence of prior steroid injections. Methods: Systematic review with meta-analysis, with primary outcome measures of pain relief and/or global improvement and subgroup analyses of methodological quality, wavelengths and treatment procedures. Results: 18 randomised placebo-controlled trials (RCTs) were identified, with 13 RCTs (730 patients) meeting the criteria for meta-analysis. 12 RCTs satisfied half or more of the methodological criteria. Publication bias was detected by Egger's graphical test, which showed a negative direction of bias. Ten of the trials included patients with poor prognosis caused by failed steroid injections or other treatment failures, or long symptom duration or severe baseline pain. The weighted mean difference (WMD) for pain relief was 10.2 mm [95% CI: 3.0 to 17.5] and the RR for global improvement was 1.36 [1.16 to 1.60]. Trials which targeted acupuncture points reported negative results, as did trials with wavelengths of 820, 830 and 1064 nm. In a subgroup of five trials with 904 nm lasers and one trial with a 632 nm wavelength, where the lateral elbow tendon insertions were directly irradiated, the WMD for pain relief was 17.2 mm [95% CI: 8.5 to 25.9] and 14.0 mm [95% CI: 7.4 to 20.6] respectively, while the RR for global pain improvement was only reported for 904 nm, at 1.53 [95% CI: 1.28 to 1.83]. LLLT doses in this subgroup ranged between 0.5 and 7.2 Joules. Secondary outcome measures of pain-free grip strength, pain pressure threshold, sick leave and follow-up data from 3 to 8 weeks after the end of treatment showed consistently significant results in favour of the same LLLT subgroup (p < 0.02). No serious side-effects were reported. Conclusion: LLLT administered with optimal doses of 904 nm and possibly 632 nm wavelengths directly to the lateral elbow tendon insertions seems to offer short-term pain relief and less disability in LET, both alone and in conjunction with an exercise regimen. This finding contradicts the conclusions of previous reviews, which failed to assess treatment procedures, wavelengths and optimal doses.
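
Since publication bias was assessed with Egger's test, a minimal sketch of that regression may help: each trial's standardised effect is regressed on its precision, and an intercept different from zero indicates funnel-plot asymmetry. The effect sizes and standard errors below are invented placeholders, not the included RCTs.

```python
# Egger's regression test for funnel-plot asymmetry: regress each trial's
# standardized effect (effect / SE) on its precision (1 / SE); an intercept
# different from zero suggests small-study / publication bias.
import numpy as np
import statsmodels.api as sm

effects = np.array([12.0, 8.5, 17.0, 5.0, 20.0, 9.5])   # e.g. pain relief in mm (placeholders)
se = np.array([4.0, 3.5, 6.0, 2.5, 7.0, 3.0])           # their standard errors

X = sm.add_constant(1.0 / se)       # column of ones + precision
y = effects / se                    # standardized effects
fit = sm.OLS(y, X).fit()

print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.3f}")
```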

Relevance:

30.00%

Publisher:

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes the basic point of the scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers, and as a result, most of them are at an early stage and still largely confined to "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and very often defined and presented only by focusing on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
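
A purely hypothetical sketch of the two environment "ingredients" named above (environment abstractions that encapsulate functions, and topology abstractions that represent the logical or physical spatial structure), together with a multi-layered grouping, might look as follows in code; none of the class or layer names come from SODA itself.

```python
# Hypothetical illustration of environment and topology abstractions and a
# layered description of a multi-agent system. Names are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class EnvironmentAbstraction:
    """A non-agent entity of the MAS environment encapsulating a function."""
    name: str

    def provide(self, request: str) -> str:
        return f"{self.name} serves request: {request}"

@dataclass
class TopologyAbstraction:
    """Represents the (logical or physical) spatial structure of the environment."""
    name: str
    neighbours: dict = field(default_factory=dict)   # place -> adjacent places

    def adjacent(self, place: str) -> list:
        return self.neighbours.get(place, [])

# A multi-layered description: each layer groups abstractions at one level of detail.
layers = {
    "core":   [EnvironmentAbstraction("message-board")],
    "detail": [TopologyAbstraction("warehouse", {"dock": ["aisle-1"], "aisle-1": ["dock"]})],
}
print(layers["core"][0].provide("post task"))
print(layers["detail"][0].adjacent("dock"))
```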

Relevance:

30.00%

Publisher:

Abstract:

Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and the concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of students' learning, to increased opportunities for students to gain greater insight into the topics studied, to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called to play a primary role. They are actively involved in the construction of their own knowledge, instead of passively receiving it. As a result, traditional, teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that provide suitable answers to the need for experimentation supports in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning. This dissertation guides readers through this research field, presenting both the theoretical and applicative results of research aimed at designing an open, flexible, learner-centered virtual lab for supporting students in learning Information Security.

Relevance:

30.00%

Publisher:

Abstract:

One of the main research areas of artificial intelligence concerns the realisation of agents (in particular, robots) able to assist or replace humans in carrying out certain activities. To this end, two different design methods can be followed: manual design and automatic design. The latter may be preferred to the former in contexts where requirements such as flexibility and adaptation must be taken into account, which are often essential for performing non-trivial tasks in real-world settings. Automatic design relies on a model with which to represent the agent's behaviour and a search (or learning) technique that iteratively modifies the model so as to make it as well suited as possible to the task at hand. In this work, the model used to represent the robot's behaviour is a Boolean network (Kauffman network). This model was chosen because its simple structure makes the nonetheless complex dynamics that arise within it easy to study. Moreover, the recent literature shows that network models, such as artificial neural networks, have proved effective in robot programming. The methodology for evolving this model relies on meta-heuristic search techniques able to find good solutions in limited time, despite the large search spaces. Previous work has already demonstrated the applicability of, and investigated, the methodology on a single robot. The aim of this work is to provide a proof of principle for a group of robots, opening new avenues for design in swarm robotics. In this scenario, simple autonomous agents, by interacting with one another, give rise to coordinated behaviour that accomplishes tasks impossible for a single unit. This work also provides useful and interesting opportunities for studying interactions among Boolean networks. In fact, each robot is controlled by a Boolean network that determines its output as a function of its own internal configuration as well as of the inputs received from neighbouring robots. In this work we define a task in which the swarm must discriminate between two different patterns on the arena floor using only locally exchanged information. After an initial series of preliminary experiments that allowed us to identify the parameters and the best search algorithm, we simplified the problem instance to better investigate the factors that may affect performance. A particular combination of information was thus identified which, when exchanged locally among robots, improves performance. The hypothesis was then confirmed by applying this result to a harder instance of the problem. The work concludes by suggesting new tools for the study of emergent phenomena in settings where Boolean networks interact with one another.
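
To make the controller model concrete, the sketch below implements a synchronous Boolean (Kauffman) network in which every node reads K other nodes and applies its own Boolean function; in the thesis each robot's network additionally receives inputs from neighbouring robots. The topology and truth tables here are randomly generated examples, not an evolved controller.

```python
# Minimal synchronous Boolean (Kauffman) network: each node has K inputs and a
# random truth table. Random topology/functions are illustrative placeholders.
import random

def random_boolean_network(n_nodes, k=2, seed=0):
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n_nodes)]  # one truth table per node
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from the current global state."""
    nxt = []
    for node_inputs, table in zip(inputs, tables):
        idx = 0
        for bit in (state[i] for i in node_inputs):
            idx = (idx << 1) | bit          # encode the K input bits as a table index
        nxt.append(table[idx])
    return nxt

inputs, tables = random_boolean_network(n_nodes=8)
rng = random.Random(1)
state = [rng.randint(0, 1) for _ in range(8)]
for _ in range(5):
    state = step(state, inputs, tables)
    print(state)
```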

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To synthesise the available evidence on the relationship between risk factors (personal and occupational) and the onset of carpal tunnel syndrome (CTS). METHODS: A systematic review of the literature was conducted on electronic databases, considering case-control and cohort studies. We assessed the quality of study reporting with the STROBE checklist. Study-specific estimates were expressed as ORs (95% CI) and combined in a meta-analysis using a random-effects model. Possible publication bias was assessed by inspecting funnel plot asymmetry and with Egger's test. RESULTS: Twenty-nine studies were selected, 19 of which were included in the meta-analysis: 13 case-control and 6 cohort studies. The meta-analysis showed a significant increase in CTS cases among obese subjects in both case-control studies [OR 2.4 (1.9-3.1); I² = 70.7%] and cohort studies [OR 2.0 (1.6-2.7); I² = 0%]. Overall heterogeneity was significant (I² = 59.6%). Similar results were obtained for diabetic subjects and those with thyroid disease. Smoking exposure was not associated with CTS in either case-control studies [OR 0.7 (0.4-1.1); I² = 83.2%] or cohort studies [OR 0.8 (0.6-1.2); I² = 45.8%]. Owing to the multiple assessment methods, it was not possible to compute a combined estimate of occupational exposures with meta-analytic techniques. From the review, CTS was found to be associated with exposure to vibration, repetitive movements and awkward hand-wrist postures. CONCLUSIONS: The results of the systematic review confirm the evidence of an association between personal risk factors and CTS. Despite the varying quality of exposure data and the differences in effects across study designs, our results indicate sufficient evidence of a link between occupational risk factors and CTS. Measuring exposure, especially for occupational risk factors, is a necessary goal for future studies.
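
The I² values quoted above summarise between-study heterogeneity; the sketch below shows how Cochran's Q and I² are obtained from study-specific odds ratios with inverse-variance weights, using placeholder numbers rather than the reviewed CTS studies.

```python
# Cochran's Q and I^2 heterogeneity statistics for a set of study-specific odds
# ratios, using inverse-variance weights on the log scale. Data are placeholders.
import math

def heterogeneity(ors, ci_low, ci_high):
    """Return (Cochran's Q, I^2 in %) from ORs and their 95% CIs."""
    y = [math.log(o) for o in ors]                                            # log odds ratios
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in zip(ci_low, ci_high)]
    w = [1 / s ** 2 for s in se]                                              # inverse-variance weights
    y_pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_pooled) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity(ors=[2.1, 2.6, 1.8, 3.0],
                      ci_low=[1.4, 1.7, 1.1, 1.9],
                      ci_high=[3.2, 4.0, 2.9, 4.7])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```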

Relevance:

30.00%

Publisher:

Abstract:

Background There is concern that non-inferiority trials might be deliberately designed to conceal that a new treatment is less effective than a standard treatment. In order to test this hypothesis we performed a meta-analysis of non-inferiority trials to assess the average effect of experimental treatments compared with standard treatments. Methods One hundred and seventy non-inferiority treatment trials published in 121 core clinical journals were included. The trials were identified through a search of PubMed (1991 to 20 February 2009). Combined relative risk (RR) from meta-analysis comparing experimental with standard treatments was the main outcome measure. Results The 170 trials contributed a total of 175 independent comparisons of experimental with standard treatments. The combined RR for all 175 comparisons was 0.994 [95% confidence interval (CI) 0.978–1.010] using a random-effects model and 1.002 (95% CI 0.996–1.008) using a fixed-effects model. Of the 175 comparisons, experimental treatment was considered to be non-inferior in 130 (74%). The combined RR for these 130 comparisons was 0.995 (95% CI 0.983–1.006) and the point estimate favoured the experimental treatment in 58% (n = 76) and standard treatment in 42% (n = 54). The median non-inferiority margin (RR) pre-specified by trialists was 1.31 [inter-quartile range (IQR) 1.18–1.59]. Conclusion In this meta-analysis of non-inferiority trials the average RR comparing experimental with standard treatments was close to 1. The experimental treatments that gain a verdict of non-inferiority in published trials do not appear to be systematically less effective than the standard treatments. Importantly, publication bias and bias in the design and reporting of the studies cannot be ruled out and may have skewed the study results in favour of the experimental treatments. Further studies are required to examine the importance of such bias.
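
The non-inferiority verdicts tallied above follow a simple decision rule that can be written down directly: the experimental treatment is declared non-inferior when the upper confidence limit of its relative risk against the standard treatment lies below the pre-specified margin (the abstract reports a median margin of RR = 1.31). The example values are invented.

```python
# Non-inferiority decision rule on the relative-risk scale. The default margin
# of 1.31 is the median pre-specified margin reported in the abstract; the
# example CI bounds are invented for illustration.
def non_inferior(rr_upper_ci, margin=1.31):
    """True if the upper 95% CI bound of the RR (experimental vs standard) is below the margin."""
    return rr_upper_ci < margin

print(non_inferior(rr_upper_ci=1.18))   # True: within the pre-specified margin
print(non_inferior(rr_upper_ci=1.45))   # False: margin exceeded
```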

Relevance:

30.00%

Publisher:

Abstract:

Objective To examine the presence and extent of small study effects in clinical osteoarthritis research. Design Meta-epidemiological study. Data sources 13 meta-analyses including 153 randomised trials (41 605 patients) that compared therapeutic interventions with placebo or non-intervention control in patients with osteoarthritis of the hip or knee and used patients’ reported pain as an outcome. Methods We compared estimated benefits of treatment between large trials (at least 100 patients per arm) and small trials, explored funnel plots supplemented with lines of predicted effects and contours of significance, and used three approaches to estimate treatment effects: meta-analyses including all trials irrespective of sample size, meta-analyses restricted to large trials, and treatment effects predicted for large trials. Results On average, treatment effects were more beneficial in small than in large trials (difference in effect sizes −0.21, 95% confidence interval −0.34 to −0.08, P=0.001). Depending on criteria used, six to eight funnel plots indicated small study effects. In six of 13 meta-analyses, the overall pooled estimate suggested a clinically relevant, significant benefit of treatment, whereas analyses restricted to large trials and predicted effects in large trials yielded smaller non-significant estimates. Conclusions Small study effects can often distort results of meta-analyses. The influence of small trials on estimated treatment effects should be routinely assessed.
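
A funnel plot of the kind explored here takes only a few lines to sketch: each trial's effect size is plotted against its standard error, with the vertical axis inverted so that large, precise trials sit at the top; asymmetry in the scatter hints at small-study effects. The data below are invented, not drawn from the included meta-analyses.

```python
# Minimal funnel-plot sketch for inspecting small-study effects.
# Effect sizes and standard errors are invented placeholders.
import numpy as np
import matplotlib.pyplot as plt

effect_sizes = np.array([-0.9, -0.6, -0.5, -0.4, -0.3, -0.25, -0.2])
standard_errors = np.array([0.45, 0.35, 0.30, 0.20, 0.12, 0.10, 0.08])

plt.scatter(effect_sizes, standard_errors)
# Dashed line at the fixed-effect (inverse-variance weighted) pooled estimate
plt.axvline(np.average(effect_sizes, weights=1 / standard_errors**2), linestyle="--")
plt.gca().invert_yaxis()                       # precise (large) trials at the top
plt.xlabel("Effect size (standardised mean difference)")
plt.ylabel("Standard error")
plt.title("Funnel plot: asymmetry suggests small-study effects")
plt.show()
```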

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To determine the effect of glucosamine, chondroitin, or the two in combination on joint pain and on radiological progression of disease in osteoarthritis of the hip or knee. DESIGN: Network meta-analysis. Direct comparisons within trials were combined with indirect evidence from other trials by using a Bayesian model that allowed the synthesis of multiple time points. MAIN OUTCOME MEASURE: Pain intensity. Secondary outcome was change in minimal width of joint space. The minimal clinically important difference between preparations and placebo was prespecified at -0.9 cm on a 10 cm visual analogue scale. DATA SOURCES: Electronic databases and conference proceedings from inception to June 2009, expert contact, relevant websites. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Large scale randomised controlled trials in more than 200 patients with osteoarthritis of the knee or hip that compared glucosamine, chondroitin, or their combination with placebo or head to head. RESULTS: 10 trials in 3803 patients were included. On a 10 cm visual analogue scale the overall difference in pain intensity compared with placebo was -0.4 cm (95% credible interval -0.7 to -0.1 cm) for glucosamine, -0.3 cm (-0.7 to 0.0 cm) for chondroitin, and -0.5 cm (-0.9 to 0.0 cm) for the combination. For none of the estimates did the 95% credible intervals cross the boundary of the minimal clinically important difference. Industry-independent trials showed smaller effects than commercially funded trials (P=0.02 for interaction). The differences in changes in minimal width of joint space were all minute, with 95% credible intervals overlapping zero. CONCLUSIONS: Compared with placebo, glucosamine, chondroitin, and their combination do not reduce joint pain or have an impact on narrowing of joint space. Health authorities and health insurers should not cover the costs of these preparations, and new prescriptions to patients who have not received treatment should be discouraged.
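
The building block behind a network meta-analysis can be illustrated with a simple indirect comparison (a Bucher-style adjustment) of two preparations through their common placebo comparator; the full analysis reported here is a Bayesian model over multiple time points, and the numbers below are placeholders only.

```python
# Indirect comparison of treatments A and B via a common placebo comparator:
# d_AB = d_A,placebo - d_B,placebo, with the variances of the two independent
# comparisons adding. Effect sizes and standard errors are placeholders.
import math

def indirect_comparison(d_a_placebo, se_a, d_b_placebo, se_b):
    """Return the indirect A-vs-B estimate and its 95% CI."""
    d_ab = d_a_placebo - d_b_placebo
    se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
    return d_ab, (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)

# Example: pain reductions (cm on a 10 cm VAS) versus placebo for two preparations
estimate, ci = indirect_comparison(d_a_placebo=-0.4, se_a=0.15, d_b_placebo=-0.3, se_b=0.18)
print(f"A vs B: {estimate:.2f} cm (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```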

Relevance:

30.00%

Publisher:

Abstract:

Objective: Group training in communication skills [communication skills training (CST)] has become partly mandatory for oncology staff. However, so far, a comprehensive meta-analysis of its efficacy is lacking. Design: Included studies either compare the efficacy of a specific training with a control group or look at the additional effect of booster sessions on communication behaviour, attitudes or patient outcomes. Methods: Four electronic databases were searched up to July 2008 without language restriction, and reference lists of earlier reviews were screened. Effect sizes (ESs) were extracted and pooled in random effects meta-analyses. Results: We included 13 trials (three non-randomised), 10 with no specific intervention in the control group. Meta-analysis showed a moderate effect of CST on communication behaviour (ES = 0.54). Three trials compared basic training courses with more extensive training courses and showed a small additional effect on communication skills (ES = 0.37). Trials investigating participants' attitudes (ES = 0.35) and patient outcomes (ES = 0.13, trend) confirmed this effect. Conclusions: Training health professionals by CST is a promising approach to changing communication behaviour and attitudes. Patients might also benefit from specifically trained health professionals, but strong studies are lacking. However, feasibility and economic aspects have to be kept in mind when considering providing training of optimal length.

Relevance:

30.00%

Publisher:

Abstract:

Objective To analyse the available evidence on cardiovascular safety of non-steroidal anti-inflammatory drugs. Design Network meta-analysis. Data sources Bibliographic databases, conference proceedings, study registers, the Food and Drug Administration website, reference lists of relevant articles, and reports citing relevant articles through the Science Citation Index (last update July 2009). Manufacturers of celecoxib and lumiracoxib provided additional data. Study selection All large scale randomised controlled trials comparing any non-steroidal anti-inflammatory drug with other non-steroidal anti-inflammatory drugs or placebo. Two investigators independently assessed eligibility. Data extraction The primary outcome was myocardial infarction. Secondary outcomes included stroke, death from cardiovascular disease, and death from any cause. Two investigators independently extracted data. Data synthesis 31 trials in 116 429 patients with more than 115 000 patient years of follow-up were included. Patients were allocated to naproxen, ibuprofen, diclofenac, celecoxib, etoricoxib, rofecoxib, lumiracoxib, or placebo. Compared with placebo, rofecoxib was associated with the highest risk of myocardial infarction (rate ratio 2.12, 95% credibility interval 1.26 to 3.56), followed by lumiracoxib (2.00, 0.71 to 6.21). Ibuprofen was associated with the highest risk of stroke (3.36, 1.00 to 11.6), followed by diclofenac (2.86, 1.09 to 8.36). Etoricoxib (4.07, 1.23 to 15.7) and diclofenac (3.98, 1.48 to 12.7) were associated with the highest risk of cardiovascular death. Conclusions Although uncertainty remains, little evidence exists to suggest that any of the investigated drugs are safe in cardiovascular terms. Naproxen seemed least harmful. Cardiovascular risk needs to be taken into account when prescribing any non-steroidal anti-inflammatory drug.