954 results for Electromyography analysis techniques
Abstract:
The aim of this paper is to illustrate the measurement of productive efficiency using the Nerlovian indicator and a metafrontier with data envelopment analysis techniques. Further, we illustrate how the profit efficiency of firms operating in different regions can be aggregated into one overarching frontier. Sugarcane production in three regions in Kenya has been used to illustrate these concepts. Results show that the sources of inefficiency in all regions are both technical and allocative, but allocative inefficiency contributes more to the overall Nerlovian (in)efficiency indicator. © 2011 Springer-Verlag.
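As a rough illustration of the profit-efficiency measurement described above, the sketch below solves a maximum-profit DEA programme and normalizes the profit gap into a Nerlovian (in)efficiency indicator. It assumes variable returns to scale and a directional vector equal to observed quantities; the function name and data layout are illustrative, and the paper's metafrontier aggregation across regions is not shown.

```python
# Hedged sketch: maximum-profit DEA (variable returns to scale) and the
# resulting Nerlovian profit (in)efficiency indicator. Data layout, function
# names, and the choice of directional vector are assumptions.
import numpy as np
from scipy.optimize import linprog

def nerlovian_inefficiency(X, Y, x0, y0, w, p, gx=None, gy=None):
    """X: K x n inputs and Y: K x m outputs of all observed firms;
    (x0, y0): the evaluated firm; (w, p): input and output prices."""
    K, n = X.shape
    m = Y.shape[1]
    gx = x0 if gx is None else gx      # directional vector defaults to observed quantities
    gy = y0 if gy is None else gy
    # decision variables z = [y (m), x (n), lambda (K)]; maximize p.y - w.x
    c = np.concatenate([-p, w, np.zeros(K)])
    A_ub = np.block([
        [np.eye(m), np.zeros((m, n)), -Y.T],   # y <= Y' lambda
        [np.zeros((n, m)), -np.eye(n), X.T],   # X' lambda <= x
    ])
    b_ub = np.zeros(m + n)
    A_eq = np.concatenate([np.zeros(m + n), np.ones(K)])[None, :]   # sum(lambda) = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    max_profit = -res.fun
    observed_profit = p @ y0 - w @ x0
    return (max_profit - observed_profit) / (p @ gy + w @ gx)   # Nerlovian indicator
```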
Abstract:
Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose-designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic grades of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r2 = 0.35-0.99), palpebral hyperaemia (r2 = 0.71-0.99), palpebral roughness (r2 = 0.30-0.94), and corneal staining (r2 = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Palpebral hyperaemia and staining grading is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
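A minimal sketch of the kind of edge-detection and colour-extraction metrics described above, assuming Sobel-style 3 × 3 kernels and relative redness as the colour metric (the paper's purpose-designed program is not reproduced here):

```python
# Illustrative quantification of a graded feature in a selected image region
# via 3x3 edge detection and colour-plane extraction.
import numpy as np
from scipy.ndimage import convolve

def edge_and_colour_metrics(rgb_roi: np.ndarray):
    """rgb_roi: HxWx3 float array (values 0-1) of the selected area of the image."""
    grey = rgb_roi.mean(axis=2)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # 3x3 kernel
    ky = kx.T
    gx = convolve(grey, kx, mode="reflect")
    gy = convolve(grey, ky, mode="reflect")
    edge_strength = float(np.hypot(gx, gy).mean())            # edge-detection metric
    r, g, b = rgb_roi[..., 0], rgb_roi[..., 1], rgb_roi[..., 2]
    redness = float((r / (r + g + b + 1e-9)).mean())           # colour-extraction metric
    return edge_strength, redness

# A quadratic fit of metric versus printed grade (0-4) can then test whether the
# scale increments are better described by a quadratic than a linear function:
# coeffs = np.polyfit(grades, metric_values, deg=2)
```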
Abstract:
Preliminary work is reported on 2-D and 3-D microstructures written directly with a Yb:YAG 1026 nm femtosecond (fs) laser on bulk chemical vapour deposition (CVD) single-crystalline diamond. Smooth graphitic lines and other structures were written on the surface of a CVD diamond sample with a thickness of 0.7 mm under low laser fluences. This capability opens up the opportunity for making electronic devices and micro-electromechanical structures on diamond substrates. The fabrication process was optimised through testing a range of laser energies at a 100 kHz repetition rate with sub-500 fs pulses. These graphitic lines and structures have been characterised using optical microscopy, Raman spectroscopy, X-ray photoelectron spectroscopy and atomic force microscopy. Using these analysis techniques, the formation of sp2 and sp3 bonds is explored and the ratio between sp2 and sp3 bonds after fs laser patterning is quantified. We present the early findings from this study and characterise the relationship between the graphitic line formation and the different fs laser exposure conditions. © 2012 Taylor & Francis.
Abstract:
A novel approach to automatic ECG analysis based on scale-space signal representation is proposed. The approach uses a curvature scale-space (CSS) representation to locate the main ECG waveform limits and peaks, and may be used either to correct the results of other ECG analysis techniques or independently. Moreover, dynamic matching of ECG CSS representations provides robust preliminary recognition of ECG abnormalities, as demonstrated by experimental results.
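A minimal sketch of a curvature scale-space representation for a 1-D signal such as an ECG lead is shown below; the paper's waveform-delineation and dynamic-matching rules are not reproduced, and the function names are illustrative.

```python
# Curvature scale-space (CSS) sketch: smooth the signal at increasing Gaussian
# scales and record where the curvature of the smoothed signal changes sign.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(signal: np.ndarray, scales):
    """Return, for each scale, the sample indices where curvature changes sign."""
    css = {}
    for sigma in scales:
        smoothed = gaussian_filter1d(signal, sigma)
        d1 = np.gradient(smoothed)
        d2 = np.gradient(d1)
        curvature = d2 / (1.0 + d1 ** 2) ** 1.5
        css[sigma] = np.where(np.diff(np.sign(curvature)) != 0)[0]
    return css

# Peaks and waveform limits (e.g. QRS onset/offset) can be located by tracking
# how these zero-crossings persist from coarse to fine scales.
```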
Abstract:
This dissertation examines the sociological process of conflict resolution and consensus building in South Florida Everglades Ecosystem Restoration through what I define as a Network Management Coordinative Interstitial Group (NetMIG). The process of conflict resolution can be summarized as the participation of interested and affected parties (stakeholders) in a forum of negotiation. I study the case of the Governor's Commission for a Sustainable South Florida (GCSSF) that was established to reduce social conflict. Such conflict originated from environmental disputes about the Everglades and was manifested in the form of gridlock among regulatory (government) agencies, Indian tribes, and agricultural, environmental conservationist and urban development interests. The purpose of the participatory forum is to reduce conflicts of interest and to achieve consensus, with the ultimate goal of restoration of the original Everglades ecosystem, while cultivating the economic and cultural bases of the communities in the area. Further, the forum aims to formulate consensus by envisioning a common sustainable community and providing means to achieve a balance between human and natural systems. Data were gathered using participant observation and document analysis techniques to conduct a theoretically based analysis of the role of the Network Management Coordinative Interstitial Group (NetMIG). I use conflict resolution theory, environmental conflict theory, stakeholder analysis, systems theory, differentiation and social change theory, and strategic management and planning theory. The purpose of this study is to substantiate the role of the Governor's Commission for a Sustainable South Florida (GCSSF) as a consortium of organizations in an effort to resolve conflict, rather than to present an ethnographic study of this organization. Environmental restoration of the Everglades is a vehicle for recognizing the significance of a Network Management Coordinative Interstitial Group (NetMIG), namely the Governor's Commission for a Sustainable South Florida (GCSSF), as a structural mechanism for stakeholder participation in the process of social conflict resolution through the creation of new cultural paradigms for a sustainable community.
Abstract:
Software architecture is the abstract design of a software system. It plays a key role as a bridge between requirements and implementation, and is a blueprint for development. The architecture represents a set of early design decisions that are crucial to a system. Mistakes in those decisions are very costly if they remain undetected until the system is implemented and deployed. This is where formal specification and analysis fit in. Formal specification ensures that an architecture design is represented in a rigorous and unambiguous way. Furthermore, a formally specified model allows the use of different analysis techniques for verifying the correctness of those crucial design decisions. This dissertation presented a framework, called SAM, for formal specification and analysis of software architectures. In terms of specification, formalisms and mechanisms were identified and chosen to specify software architecture based on different analysis needs. Formalisms for specifying properties were also explored, especially in the case of non-functional properties. In terms of analysis, the dissertation explored both the verification of functional properties and the evaluation of non-functional properties of software architecture. For the verification of functional properties, methodologies were presented on how to apply existing model checking techniques to a SAM model. For the evaluation of non-functional properties, the dissertation first showed how to incorporate stochastic information into a SAM model, and then explained how to translate the model to existing tools and conduct the analysis using those tools. To alleviate the analysis work, we also provided a tool to automatically translate a SAM model for model checking. All the techniques and methods described in the dissertation were illustrated by examples or case studies, which also served the purpose of advocating the use of formal methods in practice.
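As a generic illustration of the kind of automated analysis a formally specified model enables, the sketch below performs an explicit-state reachability check of a safety property. This is not SAM's actual formalism (SAM combines Petri nets with temporal logic); it only shows the underlying idea of exhaustive state-space exploration that model checking builds on.

```python
# Generic explicit-state check of a safety property over a small transition
# system: the underlying idea behind model checking, not SAM's formalism.
from collections import deque

def violates_safety(initial_state, successors, is_bad) -> bool:
    """Breadth-first search over reachable states; True if a bad state is reachable."""
    seen = {initial_state}
    queue = deque([initial_state])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy example: two processes that may each enter a critical section independently;
# the safety property "never both in the critical section" is violated.
def succ(s):
    p1, p2 = s
    return [(1 - p1, p2), (p1, 1 - p2)]

print(violates_safety((0, 0), succ, lambda s: s == (1, 1)))  # True
```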
Abstract:
The nation's freeway systems are becoming increasingly congested. A major contributor to traffic congestion on freeways is traffic incidents. Traffic incidents are non-recurring events such as accidents or stranded vehicles that cause a temporary roadway capacity reduction, and they can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic to avoid incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict. In addition, the duration of an incident is affected by many contributing factors. Determining and understanding these factors can help the process of identifying and developing better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success. This dissertation research attempts to improve on this previous effort by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid in the decision making of traffic diversion in the event of an ongoing incident. Multiple data mining analysis techniques were applied and evaluated in the research. Multiple linear regression analysis and a decision tree based method were applied to develop the offline models, and a rule-based method and a tree algorithm called M5P were used to develop the online models. The results show that the models in general can achieve high prediction accuracy within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of District 4 FDOT for actual applications.
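A hedged sketch of an "offline" incident-duration model using a regression tree is shown below; M5P itself is a Weka algorithm, so scikit-learn's regression tree stands in for it here, and the file name and column names are illustrative rather than the actual FDOT database schema.

```python
# Illustrative tree-based incident-duration model; data schema is hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

incidents = pd.read_csv("incidents.csv")              # hypothetical incident table
features = pd.get_dummies(
    incidents[["lanes_blocked", "time_of_day", "incident_type", "severity"]]
)
target = incidents["duration_minutes"]

X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=0)
model = DecisionTreeRegressor(max_depth=6, min_samples_leaf=20).fit(X_train, y_train)
print("MAE (minutes):", mean_absolute_error(y_test, model.predict(X_test)))
```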
Abstract:
The need for elemental analysis techniques to solve forensic problems continues to expand as the samples collected from crime scenes grow in complexity. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and another a femtosecond laser source, for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision and discrimination; however, femtosecond LA-ICP-MS did provide lower detection limits. In addition, it was determined that even for femtosecond LA-ICP-MS an internal standard should be utilized to obtain accurate analytical results for glass analyses. In the second part, a method using laser-induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to two of the leading elemental analysis techniques, μXRF and LA-ICP-MS, and the results were similar; all methods generated >99% discrimination and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and II errors en route to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested for a set of ink samples. In the first discrimination study, qualitative analysis was used to obtain 95.6% discrimination for a blind study consisting of 45 black gel ink samples provided by the United States Secret Service. A 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate were obtained for this discrimination study. In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self-collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen purchased in different locations). It was also found that gel ink from the same pen, regardless of age, was indistinguishable, as were gel ink pens (four pens) originating from the same pack.
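As an illustration of the kind of pairwise elemental comparison used for glass, the sketch below declares two fragments indistinguishable when each fragment's mean falls within a match interval of the other; the interval width, the element menu, and the study's recommended 10 ratios are assumptions, not the dissertation's actual criteria.

```python
# Hedged pairwise comparison of two glass fragments from replicate measurements.
import numpy as np

def indistinguishable(rep_a: np.ndarray, rep_b: np.ndarray, k: float = 4.0) -> bool:
    """rep_a, rep_b: replicate measurements (n_reps x n_elements) of each fragment.
    Indistinguishable if every element mean of one falls within k standard
    deviations of the other fragment's mean, and vice versa."""
    mean_a, sd_a = rep_a.mean(axis=0), rep_a.std(axis=0, ddof=1)
    mean_b, sd_b = rep_b.mean(axis=0), rep_b.std(axis=0, ddof=1)
    within_a = np.abs(mean_b - mean_a) <= k * sd_a
    within_b = np.abs(mean_a - mean_b) <= k * sd_b
    return bool(np.all(within_a & within_b))   # any discrepant element -> different sources
```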
Abstract:
The elemental analysis of soil is useful in forensic and environmental sciences. Methods were developed and optimized for two laser-based multi-element analysis techniques: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS). This work represents the first use of a 266 nm laser for forensic soil analysis by LIBS. Sample preparation methods were developed and optimized for a variety of sample types, including pellets for large bulk soil specimens (470 mg) and sediment-laden filters (47 mg), and tape-mounting for small transfer evidence specimens (10 mg). Analytical performance for sediment filter pellets and tape-mounted soils was similar to that achieved with bulk pellets. An inter-laboratory comparison exercise was designed to evaluate the performance of the LA-ICP-MS and LIBS methods, as well as for micro X-ray fluorescence (μXRF), across multiple laboratories. Limits of detection (LODs) were 0.01-23 ppm for LA-ICP-MS, 0.25-574 ppm for LIBS, 16-4400 ppm for μXRF, and well below the levels normally seen in soils. Good intra-laboratory precision (≤ 6 % relative standard deviation (RSD) for LA-ICP-MS; ≤ 8 % for μXRF; ≤ 17 % for LIBS) and inter-laboratory precision (≤ 19 % for LA-ICP-MS; ≤ 25 % for μXRF) were achieved for most elements, which is encouraging for a first inter-laboratory exercise. While LIBS generally has higher LODs and RSDs than LA-ICP-MS, both were capable of generating good quality multi-element data sufficient for discrimination purposes. Multivariate methods using principal components analysis (PCA) and linear discriminant analysis (LDA) were developed for discriminations of soils from different sources. Specimens from different sites that were indistinguishable by color alone were discriminated by elemental analysis. Correct classification rates of 94.5 % or better were achieved in a simulated forensic discrimination of three similar sites for both LIBS and LA-ICP-MS. Results for tape-mounted specimens were nearly identical to those achieved with pellets. Methods were tested on soils from USA, Canada and Tanzania. Within-site heterogeneity was site-specific. Elemental differences were greatest for specimens separated by large distances, even within the same lithology. Elemental profiles can be used to discriminate soils from different locations and narrow down locations even when mineralogy is similar.
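A minimal sketch of the multivariate discrimination step (PCA for dimension reduction followed by LDA classification by site) is given below; the element list and data file are placeholders, not the study's data.

```python
# Illustrative PCA + LDA discrimination of soil specimens by site.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

soils = pd.read_csv("soil_elements.csv")            # hypothetical table of concentrations
X = soils[["Li", "Mg", "Al", "Ti", "Mn", "Rb", "Sr", "Zr", "Ba", "Pb"]]
y = soils["site"]

clf = make_pipeline(StandardScaler(), PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)           # cross-validated classification rate
print("Correct classification rate:", scores.mean())
```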
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤ 6 mm) low-contrast (≤ 20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
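For concreteness, the sketch below estimates two of the metrics discussed: CNR and a non-prewhitening (NPW) matched-filter detectability index computed from a known lesion template and an ensemble of noise-only ROIs. The channelized Hotelling observer adds a channel decomposition on top of this and is not shown.

```python
# Hedged sketch of CNR and a non-prewhitening matched-filter detectability index.
import numpy as np

def cnr(lesion_mean: float, background_mean: float, noise_std: float) -> float:
    """Contrast-to-noise ratio of a lesion against its background."""
    return abs(lesion_mean - background_mean) / noise_std

def npw_dprime(template: np.ndarray, noise_rois: np.ndarray) -> float:
    """template: expected lesion signal (2-D); noise_rois: N x H x W noise-only ROIs.
    d' = (s.s) / sqrt(Var(s.n)), with the template itself used as the matched filter."""
    s = template.ravel()
    responses = noise_rois.reshape(noise_rois.shape[0], -1) @ s   # NPW statistic on noise
    return float(s @ s / np.sqrt(responses.var(ddof=1)))
```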
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
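The image-subtraction idea can be summarized in a few lines: subtracting two repeated scans of the same phantom cancels the identical background, leaving only noise, whose standard deviation is scaled by 1/sqrt(2). A minimal sketch, with the ROI chosen by the caller:

```python
# Quantum noise via image subtraction of two repeat scans of the same phantom.
import numpy as np

def quantum_noise(scan_a: np.ndarray, scan_b: np.ndarray, roi) -> float:
    """roi: tuple of slices selecting the measurement region."""
    diff = scan_a[roi].astype(float) - scan_b[roi].astype(float)
    return float(diff.std(ddof=1) / np.sqrt(2.0))

# e.g. noise_hu = quantum_noise(img_rep1, img_rep2, (slice(200, 264), slice(200, 264)))
```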
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
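For reference, the standard ensemble NPS estimator from rectangular ROIs is sketched below; the dissertation's contribution extends this to irregularly shaped ROIs, which is not reproduced here.

```python
# Standard 2-D noise power spectrum estimate from a stack of rectangular noise ROIs.
import numpy as np

def nps_2d(rois: np.ndarray, pixel_size_mm: float) -> np.ndarray:
    """rois: N x H x W stack of noise ROIs (e.g. from subtracted repeat scans)."""
    n, h, w = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)     # remove each ROI mean
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return pixel_size_mm ** 2 / (h * w) * spectra.mean(axis=0)   # units of HU^2 * mm^2
```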
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
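A simplified clustered-lumpy-background style texture can be generated as clusters of Gaussian lumps at random positions, as in the sketch below; the actual phantom textures used CLB parameters tuned by a genetic algorithm to match patient liver texture, and all parameter values here are illustrative.

```python
# Simplified "clustered lumpy background"-style texture synthesis.
import numpy as np

rng = np.random.default_rng(0)

def clustered_lumpy_background(n_pixels=256, n_clusters=40, lumps_per_cluster=10,
                               cluster_spread=12.0, lump_sigma=3.0, amplitude=1.0):
    yy, xx = np.mgrid[0:n_pixels, 0:n_pixels]
    image = np.zeros((n_pixels, n_pixels))
    centres = rng.uniform(0, n_pixels, size=(n_clusters, 2))    # cluster centres
    for cy, cx in centres:
        offsets = rng.normal(scale=cluster_spread, size=(lumps_per_cluster, 2))
        for oy, ox in offsets:                                   # lumps around each centre
            image += amplitude * np.exp(
                -((yy - cy - oy) ** 2 + (xx - cx - ox) ** 2) / (2 * lump_sigma ** 2))
    return image
```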
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
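A minimal sketch of an analytical lesion model (size, contrast, sigmoid edge profile) voxelized and added into an image ROI to form a "hybrid" image is given below; the dissertation's models also parameterize shape and were fit to real lesions, and the function names and defaults here are assumptions.

```python
# Hedged sketch: analytical lesion model and insertion into a patient image.
import numpy as np

def lesion_model_2d(size_px=201, radius_px=20.0, contrast_hu=-15.0, edge_width_px=3.0):
    """Circularly symmetric lesion with a sigmoid edge profile (HU values)."""
    yy, xx = np.mgrid[0:size_px, 0:size_px] - (size_px - 1) / 2
    r = np.hypot(yy, xx)
    return contrast_hu / (1.0 + np.exp((r - radius_px) / edge_width_px))

def insert_lesion(image: np.ndarray, lesion: np.ndarray, centre) -> np.ndarray:
    """Add the voxelized lesion into a copy of the image at `centre` (row, col);
    assumes the centre is far enough from the image borders."""
    hybrid = image.astype(float).copy()
    h, w = lesion.shape
    y0, x0 = centre[0] - h // 2, centre[1] - w // 2
    hybrid[y0:y0 + h, x0:x0 + w] += lesion
    return hybrid
```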
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
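One reasonable way to frame the dose comparison for a single feature is a paired test across the 50% and 100% dose images of the same lesions, as sketched below; the dissertation's exact statistical procedure, file name, and column names are not reproduced here.

```python
# Illustrative paired comparison of one quantitative lesion feature across dose levels.
import pandas as pd
from scipy.stats import wilcoxon

features = pd.read_csv("lesion_features.csv")   # hypothetical: one row per lesion, both doses
stat, p_value = wilcoxon(features["volume_dose100"], features["volume_dose50"])
print(f"Dose effect on lesion volume: Wilcoxon p = {p_value:.3f}")
```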
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
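For a 2AFC experiment, the measured proportion of correct responses maps to a detectability index via d' = sqrt(2) · z(Pc); a small helper, assuming unbiased responses:

```python
# Convert 2AFC percent correct to a detectability index d'.
import numpy as np
from scipy.stats import norm

def dprime_from_2afc(proportion_correct: float) -> float:
    """d' = sqrt(2) * inverse normal CDF of the proportion correct."""
    return float(np.sqrt(2.0) * norm.ppf(proportion_correct))

print(dprime_from_2afc(0.85))  # e.g. 85% correct -> d' ~ 1.47
```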
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
BACKGROUND: The aim of this study was to investigate the association between statin use and survival in a population-based colorectal cancer (CRC) cohort and perform an updated meta-analysis to quantify the magnitude of any association.
METHODS: A cohort of 8391 patients with newly diagnosed Dukes' A-C CRC (2009-2012) was identified from the Scottish Cancer Registry. This cohort was linked to the Prescribing Information System and the National Records of Scotland Death Records (until January 2015) to identify 1064 colorectal cancer-specific deaths. Adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for cancer-specific mortality by statin use were calculated using time-dependent Cox regression models. The systematic review included relevant studies published before January 2016. Meta-analysis techniques were used to derive combined HRs for associations between statin use and cancer-specific and overall mortality.
RESULTS: In the Scottish cohort, statin use before diagnosis (HR=0.84, 95% CI 0.75-0.94), but not after (HR=0.90, 95% CI 0.77-1.05), was associated with significantly improved cancer-specific mortality. The systematic review identified 15 relevant studies. In the meta-analysis, there was consistent (I²=0%, heterogeneity P=0.57) evidence of a reduction in cancer-specific mortality with statin use before diagnosis in 6 studies (n=86,622, pooled HR=0.82, 95% CI 0.79-0.86), but this association was less apparent and more heterogeneous (I²=67%, heterogeneity P=0.03) with statin use after diagnosis in 4 studies (n=19,152, pooled HR=0.84, 95% CI 0.68-1.04).
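The pooling step of such a meta-analysis can be sketched as inverse-variance weighting of log hazard ratios with Cochran's Q and I² for heterogeneity; the study values below are placeholders, not the trials actually included.

```python
# Fixed-effect inverse-variance pooling of hazard ratios with I^2 heterogeneity.
import numpy as np

def pool_fixed_effect(hrs, ci_lows, ci_highs):
    """Pooled HR, its 95% CI, and I^2 from per-study HRs and 95% CIs."""
    log_hr = np.log(hrs)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)   # SE of log HR from CI width
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = 1.0 / np.sqrt(np.sum(w))
    q = np.sum(w * (log_hr - pooled) ** 2)                   # Cochran's Q
    i2 = max(0.0, (q - (len(hrs) - 1)) / q) * 100 if q > 0 else 0.0
    return (np.exp(pooled), np.exp(pooled - 1.96 * pooled_se),
            np.exp(pooled + 1.96 * pooled_se), i2)

hr, lo, hi, i2 = pool_fixed_effect(          # placeholder study values
    hrs=np.array([0.84, 0.82, 0.79]),
    ci_lows=np.array([0.75, 0.74, 0.70]),
    ci_highs=np.array([0.94, 0.91, 0.89]))
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```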
CONCLUSION: In a Scottish CRC cohort and updated meta-analysis there was some evidence that statin use was associated with improved survival. However, these associations were weak in magnitude and, particularly for post-diagnosis use, varied markedly between studies.
Abstract:
Part 19: Knowledge Management in Networks
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts issued from classic, matrix-forming, linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier-Stokes solver, provides very similar results to classical linear instability analysis techniques. In addition, we compare DMD results issued from non-linear and linearised Navier-Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier-Stokes equations, and showing the potential of applying this type of snapshot-based analysis to general-purpose CFD codes without modification. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier-Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity to base flow modification information for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
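A minimal SVD-based DMD sketch operating on a snapshot matrix (columns are successive flow fields) is given below; the study applied this to snapshots from a 3D discontinuous Galerkin Navier-Stokes solver, which is not reproduced here.

```python
# Snapshot-based dynamic mode decomposition via the standard SVD algorithm.
import numpy as np

def dmd(snapshots: np.ndarray, rank: int):
    """snapshots: (n_dof, n_time) matrix. Returns DMD eigenvalues and modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s      # low-rank approximation of the operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W / eigvals       # exact DMD modes
    return eigvals, modes

# Growth rates follow from log(eigvals) / dt; unstable global modes have a
# positive real part, mirroring what a BiGlobal analysis would predict.
```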
Abstract:
The aim of this study is to determine which social agents are involved in the political debate on Twitter and whether the interpretive hegemony of the actors that have traditionally been most prominent is tempered when the task of framing is shared with audiences. The relationship between the interpretations expressed and the profiles of participants is analyzed in comparison with the frames used by mainstream media. The chosen methodology combines content analysis and discourse analysis techniques on a sample of 1,504 relevant tweets posted on two political issues (the approval of the education law LOMCE and the evictions caused by the crisis), which have also been studied in the front pages of four leading newspapers in Spain. The results show a correlation between the singularities of each political issue, the frames used, and the type of discussion, depending on the participants.