924 results for 280401 Analysis of Algorithms and Complexity
Abstract:
This paper discusses the main characteristics and presents a comparative analysis of three synchronization algorithms based, respectively, on a Phase-Locked Loop, a Kalman Filter and a Discrete Fourier Transform. The single- and three-phase models of the first two methods and the single-phase model of the third one will be described. Details on how to modify the filtering properties or dynamic response of each algorithm will be discussed in terms of their design parameters. In order to compare the different algorithms, these parameters will be set for maximum filtering capability. Then, the dynamic response during input amplitude and frequency deviations will be observed, as well as during the initialization procedure. Finally, advantages and disadvantages of all considered algorithms will be discussed. ©2007 IEEE.
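To make the first of these methods concrete, the sketch below shows a minimal single-phase PLL in Python, assuming a product-type phase detector and a PI loop filter; the gains, sampling rate and test signal are illustrative choices, not the design parameters evaluated in the paper.

```python
import numpy as np

def single_phase_pll(v, fs, f0=50.0, kp=100.0, ki=20000.0):
    """Minimal single-phase PLL: a product-type phase detector feeds a PI
    loop filter whose output corrects the estimated angular frequency; the
    frequency is integrated to track the phase of the input signal v."""
    dt = 1.0 / fs
    theta = 0.0                 # estimated phase (rad)
    integ = 0.0                 # integral state of the PI controller
    theta_hat = np.zeros(len(v))
    for n, vn in enumerate(v):
        err = vn * np.cos(theta)            # ~0.5*sin(phase error) at low frequency
        integ += ki * err * dt
        w = 2.0 * np.pi * f0 + kp * err + integ
        theta = (theta + w * dt) % (2.0 * np.pi)
        theta_hat[n] = theta
    return theta_hat

# Usage: lock onto a 50 Hz sinusoid sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.2, 1.0 / fs)
theta_hat = single_phase_pll(np.sin(2 * np.pi * 50 * t + 0.3), fs)
```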
Abstract:
OBJECTIVES The purpose of this study was to compare the 2-year safety and effectiveness of new- versus early-generation drug-eluting stents (DES) according to the severity of coronary artery disease (CAD) as assessed by the SYNTAX (Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) score. BACKGROUND New-generation DES are considered the standard of care in patients with CAD undergoing percutaneous coronary intervention. However, there are few data investigating the effects of new- over early-generation DES according to the anatomic complexity of CAD. METHODS Patient-level data from 4 contemporary, all-comers trials were pooled. The primary device-oriented clinical endpoint was the composite of cardiac death, myocardial infarction, or ischemia-driven target-lesion revascularization (TLR). The principal effectiveness and safety endpoints were TLR and definite stent thrombosis (ST), respectively. Adjusted hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated at 2 years for overall comparisons, as well as stratified for patients with lower (SYNTAX score ≤11) and higher complexity (SYNTAX score >11). RESULTS A total of 6,081 patients were included in the study. New-generation DES (n = 4,554) compared with early-generation DES (n = 1,527) reduced the primary endpoint (HR: 0.75 [95% CI: 0.63 to 0.89]; p = 0.001) without interaction (p = 0.219) between patients with lower (HR: 0.86 [95% CI: 0.64 to 1.16]; p = 0.322) versus higher CAD complexity (HR: 0.68 [95% CI: 0.54 to 0.85]; p = 0.001). In patients with SYNTAX score >11, new-generation DES significantly reduced TLR (HR: 0.36 [95% CI: 0.26 to 0.51]; p < 0.001) and definite ST (HR: 0.28 [95% CI: 0.15 to 0.55]; p < 0.001) to a greater extent than in the low-complexity group (TLR p_int = 0.059; ST p_int = 0.013). New-generation DES decreased the risk of cardiac mortality in patients with SYNTAX score >11 (HR: 0.45 [95% CI: 0.27 to 0.76]; p = 0.003) but not in patients with SYNTAX score ≤11 (p_int = 0.042). CONCLUSIONS New-generation DES improve clinical outcomes compared with early-generation DES, with greater safety and effectiveness in patients with SYNTAX score >11.
Abstract:
Magnetoencephalography (MEG) allows the real-time recording of neural activity and oscillatory activity in distributed neural networks. We applied a non-linear complexity analysis to resting-state neural activity as measured using whole-head MEG. Recordings were obtained from 20 unmedicated patients with major depressive disorder and 19 matched healthy controls. Subsequently, after 6 months of pharmacological treatment with the antidepressant mirtazapine 30 mg/day, patients received a second MEG scan. A measure of the complexity of neural signals, the Lempel–Ziv Complexity (LZC), was derived from the MEG time series. We found that depressed patients showed higher pre-treatment complexity values compared with controls, and that complexity values decreased after 6 months of effective pharmacological treatment, although this effect was statistically significant only in younger patients. The main treatment effect was to recover the tendency observed in controls of a positive correlation between age and complexity values. Importantly, the reduction of complexity with treatment correlated with the degree of clinical symptom remission. We suggest that LZC, a formal measure of neural activity complexity, is sensitive to the dynamic physiological changes observed in depression and may potentially offer an objective marker of depression and its remission after treatment.
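For readers unfamiliar with the measure, the following Python sketch shows one common way to estimate Lempel–Ziv complexity from a time series: binarize the signal around its median and count the new phrases found in a sequential parse, normalizing by n/log2(n). The binarization rule and this simplified dictionary-based parse are assumptions; the exact LZ variant and normalization used in the study may differ.

```python
import numpy as np

def lempel_ziv_complexity(signal):
    """Binarize a 1-D signal around its median and count the number of new
    phrases found in a left-to-right parse (a simplified dictionary-based
    variant of the Lempel-Ziv scheme), normalized by n / log2(n)."""
    med = np.median(signal)
    s = ''.join('1' if x > med else '0' for x in signal)
    phrases, i, count = set(), 0, 0
    n = len(s)
    while i < n:
        j = i + 1
        # grow the current phrase until it has not been seen before
        while j <= n and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        count += 1
        i = j
    return count * np.log2(n) / n if n > 1 else 0.0

# White noise should score higher than a slowly varying sine wave.
rng = np.random.default_rng(0)
print(lempel_ziv_complexity(rng.normal(size=2000)))
print(lempel_ziv_complexity(np.sin(np.linspace(0, 20 * np.pi, 2000))))
```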
Abstract:
In this paper we present a tool to carry out the multifractal analysis of binary, two-dimensional images through the calculation of the Rényi D(q) dimensions and associated statistical regressions. The estimation of a (mono)fractal dimension corresponds to the special case where the moment order is q = 0.
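As a sketch of the underlying computation (not of the tool itself), a box-counting estimate of the Rényi dimensions D(q) can be written in a few lines of Python: for each box size, the occupied mass in each box yields probabilities p_i, and D(q) is the regression slope of log(sum_i p_i^q)/(q-1) against log(box size), with the usual limit form for q = 1. The box sizes and the test image below are arbitrary choices.

```python
import numpy as np

def renyi_dimensions(image, qs, box_sizes):
    """Box-counting estimate of the Renyi generalized dimensions D(q) of a
    binary image.  For each box size eps, the occupied-pixel mass per box
    gives probabilities p_i; D(q) is the slope of log(sum_i p_i^q)/(q-1)
    against log(eps), with the q = 1 case handled separately."""
    total = image.sum()
    dims = {}
    for q in qs:
        xs, ys = [], []
        for eps in box_sizes:
            # coarse-grain the image into eps x eps boxes and sum each box
            h = (image.shape[0] // eps) * eps
            w = (image.shape[1] // eps) * eps
            boxes = image[:h, :w].reshape(h // eps, eps, w // eps, eps).sum(axis=(1, 3))
            p = boxes[boxes > 0] / total
            if q == 1:
                ys.append(np.sum(p * np.log(p)))
            else:
                ys.append(np.log(np.sum(p ** q)) / (q - 1))
            xs.append(np.log(eps))
        slope, _ = np.polyfit(xs, ys, 1)
        dims[q] = slope
    return dims

# Example: a filled square should give D(q) close to 2 for every q.
img = np.zeros((256, 256), dtype=int)
img[64:192, 64:192] = 1
print(renyi_dimensions(img, qs=[0, 1, 2], box_sizes=[2, 4, 8, 16, 32]))
```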
Abstract:
Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on the Euclidean distance, ED) and three statistical complexities (based on the López-Ruiz–Mancini–Calbet complexity, LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters (p < 0.01). In addition, statistically significant differences between MCI subjects and controls were achieved by ED and LMC (p < 0.05). In order to assess the diagnostic ability of the parameters, a linear discriminant analysis with a leave-one-out cross-validation procedure was applied. The accuracies reached 83.9% and 65.9% to discriminate AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
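To fix notation, a minimal Python sketch of the kind of measures involved is given below: normalized Shannon entropy, Tsallis and Rényi entropies of order q, and an LMC-style complexity computed as entropy times a Euclidean disequilibrium. The probability distribution here is taken from the normalized power spectrum of a toy signal, which is only an assumption about how such distributions might be obtained, not the study's exact procedure.

```python
import numpy as np

def shannon(p):
    """Shannon entropy, normalized to [0, 1] by log(N)."""
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)) / np.log(len(p)))

def tsallis(p, q=2.0):
    """Tsallis entropy of order q: (1 - sum p_i^q) / (q - 1)."""
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def renyi(p, q=2.0):
    """Renyi entropy of order q: log(sum p_i^q) / (1 - q)."""
    return float(np.log(np.sum(p ** q)) / (1.0 - q))

def lmc_complexity(p):
    """LMC-style statistical complexity: normalized Shannon entropy times
    the Euclidean disequilibrium D = sum (p_i - 1/N)^2."""
    d = np.sum((p - 1.0 / len(p)) ** 2)
    return float(shannon(p) * d)

# Toy distribution: normalized power spectrum of a random signal.
rng = np.random.default_rng(0)
spec = np.abs(np.fft.rfft(rng.normal(size=1024))) ** 2
p = spec / spec.sum()
print(shannon(p), tsallis(p), renyi(p), lmc_complexity(p))
```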
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
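For orientation, the plain (non-validated) Picard iteration that the domain-theoretic method builds on looks roughly like the Python sketch below; the interval enclosures and arbitrary-precision arithmetic that are the actual subject of the paper are deliberately omitted, and the trapezoidal quadrature is an arbitrary choice.

```python
import numpy as np

def picard_iterations(f, y0, t, n_iter):
    """Classical Picard iteration for y' = f(t, y), y(0) = y0, on a grid t:
    y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds, with the integral
    approximated by the trapezoidal rule.  This is the plain fixed-point
    scheme only, without the paper's validated enclosures."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(n_iter):
        integrand = f(t, y)
        y = y0 + np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return y

# Example: y' = y, y(0) = 1, whose exact solution is exp(t).
t = np.linspace(0.0, 1.0, 101)
approx = picard_iterations(lambda s, y: y, 1.0, t, n_iter=20)
print(abs(approx[-1] - np.e))   # small after enough iterations
```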
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for Intrusion Detection and the modified versions of this dataset, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared by accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features to detect cyber-attacks in all categories and in each category are evaluated with Weka's Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features to detect cyber-attacks are basic features such as the number of seconds of a network connection, the protocol used for the connection, the network service used, the normal or error status of the connection and the number of data bytes sent. The most important features to detect DoS, Probing and R2L attacks are basic features, and the least important are content features; for U2R attacks, by contrast, the content features are the most important for detection.
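As a rough illustration of the kind of comparison performed, the scikit-learn sketch below trains two of the four learners (k-NN and Naïve Bayes) and reports accuracy and error rate on a held-out split. The synthetic five-class data is only a stand-in for the preprocessed NSL-KDD instances, and the thesis' average-cost metric and Weka-based Information Gain ranking are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the preprocessed NSL-KDD features;
# in the thesis the instances are labelled normal/DoS/Probing/R2L/U2R.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=5,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: accuracy={acc:.3f}, error rate={1 - acc:.3f}")
```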
Abstract:
Background: The inherent complexity of statistical methods and clinical phenomena compels researchers with diverse domains of expertise to work in interdisciplinary teams, where none of them has complete knowledge of their counterpart's field. As a result, knowledge exchange may often be characterized by miscommunication leading to misinterpretation, ultimately resulting in errors in research and even clinical practice. Although communication has a central role in interdisciplinary collaboration and miscommunication can have a negative impact on research processes, to the best of our knowledge, no study has yet explored how data analysis specialists and clinical researchers communicate over time. Methods/Principal Findings: We conducted qualitative analysis of encounters between clinical researchers and data analysis specialists (epidemiologist, clinical epidemiologist, and data mining specialist). These encounters were recorded and systematically analyzed using a grounded theory methodology for extraction of emerging themes, followed by data triangulation and analysis of negative cases for validation. A policy analysis was then performed using a system dynamics methodology looking for potential interventions to improve this process. Four major emerging themes were found. Definitions using lay language were frequently employed as a way to bridge the language gap between the specialties. Thought experiments presented a series of "what if" situations that helped clarify how the method or information from the other field would behave if exposed to alternative situations, ultimately aiding in explaining their main objective. Metaphors and analogies were used to translate concepts across fields, from the unfamiliar to the familiar. Prolepsis was used to anticipate study outcomes, thus helping specialists understand the current context based on an understanding of their final goal. Conclusion/Significance: The communication between clinical researchers and data analysis specialists presents multiple challenges that can lead to errors.
Abstract:
Since 2000, the southwestern Brazilian Amazon has undergone a rapid transformation from natural vegetation and pastures to row-crop agriculture, with the potential to affect regional biogeochemistry. The goals of this research are to assess wavelet algorithms applied to MODIS time series to determine the expansion of row-crops and the intensification of the number of crops grown. MODIS provides data from February 2000 to the present, a period of agricultural expansion and intensification in the southwestern Brazilian Amazon. We selected a study area near Comodoro, Mato Grosso because of the rapid growth of row-crop agriculture and the availability of ground truth data of agricultural land-use history. We used a 90% power wavelet transform to create a wavelet-smoothed time series for five years of MODIS EVI data. From this wavelet-smoothed time series we determine the characteristic phenology of single and double crops. We estimate that over 3200 km² were converted from native vegetation and pasture to row-crop agriculture from 2000 to 2005 in our study area encompassing 40,000 km². We observe an increase of 2000 km² of agricultural intensification, where areas of single crops were converted to double crops during the study period. © 2007 Elsevier Inc. All rights reserved.
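As a generic sketch of wavelet smoothing applied to a vegetation-index time series (using PyWavelets rather than the paper's 90%-power wavelet transform), the code below zeroes the detail coefficients of a noisy, double-cropping-like EVI curve and reconstructs a smoothed series; the wavelet family, decomposition level and synthetic signal are assumptions.

```python
import numpy as np
import pywt

def wavelet_smooth(series, wavelet="db4", level=3):
    """Smooth a 1-D time series by zeroing the wavelet detail coefficients
    and reconstructing.  A generic denoising sketch, not the 90%-power
    criterion used in the paper."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

# Example: an EVI-like signal with two crop peaks per year (16-day composites).
t = np.arange(0, 5 * 23)                                   # ~5 years, 23 steps/year
evi = 0.3 + 0.25 * np.abs(np.sin(2 * np.pi * t / 23))      # double-crop phenology
noisy = evi + np.random.default_rng(0).normal(0, 0.05, evi.size)
smooth = wavelet_smooth(noisy)
```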
Abstract:
In this paper, we propose an approach to the transient and steady-state analysis of the affine combination of one fast and one slow adaptive filter. The theoretical models are based on expressions for the excess mean-square error (EMSE) and cross-EMSE of the component filters, which allow their application to different combinations of algorithms, such as least mean-squares (LMS), normalized LMS (NLMS), and the constant modulus algorithm (CMA), considering white or colored inputs and stationary or nonstationary environments. Since the desired universal behavior of the combination depends on the correct estimation of the mixing parameter at every instant, its adaptation is also taken into account in the transient analysis. Furthermore, we propose normalized algorithms for the adaptation of the mixing parameter that exhibit good performance. Good agreement between analysis and simulation results is always observed.
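A minimal sketch of the idea is given below, assuming NLMS component filters and a plain stochastic-gradient update for the mixing parameter (the paper also covers LMS and CMA components and proposes normalized mixing-parameter updates not reproduced here); the step sizes, filter length and system-identification example are illustrative.

```python
import numpy as np

def affine_combination_nlms(x, d, M=8, mu_fast=0.5, mu_slow=0.05, mu_eta=0.1):
    """Affine combination y = eta*y1 + (1-eta)*y2 of a fast and a slow NLMS
    filter; eta is adapted by a stochastic gradient on the combined error."""
    w1, w2 = np.zeros(M), np.zeros(M)
    eta = 0.5
    y = np.zeros(len(d))
    for n in range(M, len(d)):
        u = x[n - M + 1:n + 1][::-1]        # regressor (most recent sample first)
        y1, y2 = w1 @ u, w2 @ u
        y[n] = eta * y1 + (1 - eta) * y2
        e1, e2 = d[n] - y1, d[n] - y2
        e = d[n] - y[n]
        norm = u @ u + 1e-8
        w1 += mu_fast * e1 * u / norm       # fast NLMS update
        w2 += mu_slow * e2 * u / norm       # slow NLMS update
        eta += mu_eta * e * (y1 - y2)       # gradient step for the mixing parameter
    return y

# Example: identify an unknown FIR channel from noisy observations.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
h = np.array([0.7, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.normal(size=len(x))
y = affine_combination_nlms(x, d)
```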
Abstract:
The common approach of bioelectrical impedance analysis to estimate body water uses a wrist-to-ankle methodology which, although not indicated by theory, has the advantage of ease of application, particularly for clinical studies involving patients with debilitating diseases. A number of authors have suggested the use of a segmental protocol in which the impedances of the trunk and limbs are measured separately to provide a methodology more in keeping with basic theory. The segmental protocol has not, however, been generally adopted, partly because of the increased complexity involved in its application, and partly because studies comparing the two methodologies have not clearly demonstrated a significant improvement from the segmental methodology. We have conducted a small pilot study involving ten subjects to investigate the efficacy of the two methodologies in a group of normal subjects. The study did not require an independent measure of body water, by for example isotope dilution, as the subjects were maintained in a state of constant hydration with only the distribution between limbs and trunk changing as a result of change in posture. The results demonstrate a significant difference between the two methodologies in predicting the expected constancy of body water in this study, with the segmental methodology indicating a mean percentage change in extracellular water of -2.2%, which was not significantly different from the expected null result, whereas the wrist-to-ankle methodology indicated a mean percentage change in extracellular water of -6.6%. This is significantly different from the null result, and from the value obtained from the segmental methodology (p = 0.006). Similar results were obtained using estimates of total body water from the two methodologies. © 1998 Elsevier Science Ltd. All rights reserved.
Abstract:
Neurological disease or dysfunction in newborn infants is often first manifested by seizures. Prolonged seizures can result in impaired neurodevelopment or even death. In adults, the clinical signs of seizures are well defined and easily recognized. In newborns, however, the clinical signs are subtle and may be absent or easily missed without constant close observation. This article describes the use of adaptive signal processing techniques for removing artifacts from newborn electroencephalogram (EEG) signals. Three adaptive algorithms have been designed in the context of EEG signals. This preprocessing is necessary before attempting a fine time-frequency analysis of EEG rhythmical activities, such as electrical seizures, corrupted by high amplitude signals. After an overview of newborn EEG signals, the authors describe the data acquisition set-up. They then introduce the basic physiological concepts related to normal and abnormal newborn EEGs and discuss the three adaptive algorithms for artifact removal. They also present time-frequency representations (TFRs) of seizure signals and discuss the estimation and modeling of the instantaneous frequency related to the main ridge of the TFR.
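As a generic illustration of adaptive artifact removal (not the three algorithms designed in the article), the Python sketch below implements a basic LMS noise canceller: a reference channel correlated with the artifact is adaptively filtered to predict the artifact in the EEG channel, and the prediction is subtracted. The reference signal, filter length and step size are illustrative assumptions.

```python
import numpy as np

def lms_noise_canceller(primary, reference, M=16, mu=0.01):
    """Adaptive noise cancellation: an LMS filter predicts the artifact in
    the primary (EEG) channel from a reference channel; subtracting the
    prediction yields an approximately artifact-free output."""
    w = np.zeros(M)
    cleaned = np.array(primary, dtype=float)
    for n in range(M, len(primary)):
        u = reference[n - M:n][::-1]        # reference regressor
        artifact_est = w @ u
        e = primary[n] - artifact_est       # error = cleaned EEG sample
        w += mu * e * u                     # LMS weight update
        cleaned[n] = e
    return cleaned

# Example: a sinusoidal "artifact" leaking into a noisy EEG-like signal.
rng = np.random.default_rng(0)
n = 4000
eeg = rng.normal(0, 1, n)
artifact_source = np.sin(2 * np.pi * 0.05 * np.arange(n))
primary = eeg + 2.0 * artifact_source                 # contaminated channel
reference = artifact_source + 0.1 * rng.normal(0, 1, n)
cleaned = lms_noise_canceller(primary, reference)
```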
Abstract:
Observational data collected in the Lake Tekapo hydro catchment of the Southern Alps in New Zealand are used to analyse the wind and temperature fields in the alpine lake basin during summertime fair weather conditions. Measurements from surface stations, pilot balloon and tethersonde soundings, Doppler sodar and an instrumented light aircraft provide evidence of multi-scale interacting wind systems, ranging from microscale slope winds to mesoscale coast-to-basin flows. Thermal forcing of the winds occurred due to differential heating as a consequence of orography and heterogeneous surface features, which is quantified by heat budget and pressure field analysis. The daytime vertical temperature structure was characterised by distinct layering. Features of particular interest are the formation of thermal internal boundary layers due to the lake-land discontinuity and the development of elevated mixed layers. The latter were generated by advective heating from the basin and valley sidewalls by slope winds and by a superimposed valley wind blowing from the basin over Lake Tekapo and up the tributary Godley Valley. Daytime heating in the basin and its tributary valleys caused the development of a strong horizontal temperature gradient between the basin atmosphere and that over the surrounding landscape, and hence the development of a mesoscale heat low over the basin. After noon, air from outside the basin started flowing over mountain saddles into the basin, causing cooling in the lowest layers, whereas at ridge-top height the horizontal air temperature gradient between inside and outside the basin continued to increase. In the early evening, a more massive intrusion of cold air caused rapid cooling and a transition to a rather uniform, slightly stable stratification up to about 2000 m agl. The onset time of this rapid cooling varied by about 1-2 h between observation sites and was probably triggered by the decay of up-slope winds inside the basin, which previously countered the intrusion of air over the surrounding ridges. The intrusion of air from outside the basin continued until about midnight, when a northerly mountain wind from the Godley Valley became dominant. The results illustrate the extreme complexity that can be caused by the operation of thermal forcing processes at a wide range of spatial scales.
Abstract:
Intensity Modulated Radiotherapy (IMRT) is a technique introduced to shape the dose distributions to the tumour more precisely, providing a higher dose escalation in the volume to be irradiated while simultaneously decreasing the dose in the organs at risk, which consequently reduces the treatment toxicity. This technique is widely used in prostate and head and neck (H&N) tumours. Given the complexity and the use of high doses in this technique, it is necessary to ensure a safe and secure administration of the treatment through the use of quality control programmes for IMRT. The purpose of this study was to evaluate statistically the quality control measurements that are made for the IMRT plans of prostate and H&N patients before the beginning of the treatment, analysing their variations, the percentage of rejected and repeated measurements, the averages, standard deviations and proportion relations.