829 results for Optimal filtering
Abstract:
Purpose: To estimate the metabolic activity of rectal cancers at 6 and 12 weeks after completion of chemoradiation therapy (CRT) by 2-[fluorine-18] fluoro-2-deoxy-D-glucose-labeled positron emission tomography/computed tomography ([18F]FDG PET/CT) imaging and correlate it with response to CRT. Methods and Materials: Patients with cT2-4N0-2M0 distal rectal adenocarcinoma treated with long-course neoadjuvant CRT (54 Gy, 5-fluorouracil-based) were prospectively studied (ClinicalTrials.gov identifier NCT00254683). All patients underwent 3 PET/CT studies (at baseline and at 6 and 12 weeks from CRT completion). Clinical assessment was at 12 weeks. Maximal standard uptake value (SUVmax) of the primary tumor was measured and recorded at each PET/CT study at 1 h (early) and 3 h (late) after [18F]FDG injection. Patients with an increase in early SUVmax between 6 and 12 weeks were considered "bad" responders and the others "good" responders. Results: Ninety-one patients were included; 46 patients (51%) were "bad" responders, whereas 45 (49%) were "good" responders. "Bad" responders were less likely to develop complete clinical response (6.5% vs. 37.8%, respectively; P=.001), less likely to develop significant histological tumor regression (complete or near-complete pathological response; 16% vs. 45%, respectively; P=.008), and exhibited greater final tumor dimension (4.3 cm vs. 3.3 cm; P=.03). A decrease between early (1 h) and late (3 h) SUVmax at the 6-week PET/CT was a significant predictor of "good" response (accuracy of 67%). Conclusions: Patients who developed an increase in SUVmax after 6 weeks were less likely to develop significant tumor downstaging. Early-late SUVmax variation at 6-week PET/CT may help identify these patients and allow tailored selection of CRT-surgery intervals for individual patients. (C) 2012 Elsevier Inc.
Abstract:
Background The optimal revascularization strategy for diabetic patients with multivessel coronary artery disease (MVD) remains uncertain for lack of an adequately powered, randomized trial. The FREEDOM trial was designed to compare contemporary coronary artery bypass grafting (CABG) to percutaneous coronary intervention (PCI) with drug-eluting stents in diabetic patients with MVD against a background of optimal medical therapy. Methods A total of 1,900 diabetic participants with MVD were randomized to PCI or CABG worldwide from April 2005 to March 2010. FREEDOM is a superiority trial with a mean follow-up of 4.37 years (minimum 2 years) and 80% power to detect a 27.0% relative reduction. We present the baseline characteristics of patients screened and randomized, and provide a comparison with other MVD trials involving diabetic patients. Results The randomized cohort was 63.1 +/- 9.1 years old and 29% female, with a mean diabetes duration of 10.2 +/- 8.9 years. Most (83%) had 3-vessel disease and on average took 5.5 +/- 1.7 vascular medications, with 32% on insulin therapy. Nearly all had hypertension and/or dyslipidemia, and 26% had a prior myocardial infarction. Mean hemoglobin A1c was 7.8 +/- 1.7%, 29% had low-density lipoprotein <70 mg/dL, and mean systolic blood pressure was 134 +/- 20 mm Hg. The mean SYNTAX score was 26.2 with a symmetric distribution. FREEDOM trial participants have baseline characteristics similar to those of contemporary multivessel and diabetes trial cohorts. Conclusions The FREEDOM trial has successfully recruited a high-risk diabetic MVD cohort. Follow-up efforts include aggressive monitoring to optimize background risk factor control. FREEDOM will contribute significantly to the PCI versus CABG debate in diabetic patients with MVD. (Am Heart J 2012;164:591-9.)
Abstract:
This work presents the application of Linear Matrix Inequalities (LMIs) to the robust and optimal adjustment of Power System Stabilizers with a pre-defined structure. Results of some tests show that gain and zero adjustments are sufficient to guarantee robust stability and performance with respect to various operating points. Making use of the flexible structure of LMIs, we propose an algorithm that minimizes the norm of the controllers' gain matrix while guaranteeing the damping factor specified for the closed-loop system, always using a controller with flexible structure. The technique used here is pole placement, whose objective is to place the poles of the closed-loop system in a specific region of the complex plane. Results of tests with a nine-machine system are presented and discussed in order to validate the proposed algorithm. (C) 2012 Elsevier Ltd. All rights reserved.
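The regional pole placement described above can be written as a standard D-stability LMI. The sketch below is not the authors' structured-stabilizer algorithm; it is a minimal state-feedback illustration, assuming cvxpy and numpy are available, of the Chilali-Gahinet conic-sector LMI that enforces a minimum damping factor while the Frobenius norm of the decision variable Y = K P is minimized as a proxy for a small controller gain.

```python
import numpy as np
import cvxpy as cp

def damping_cone_state_feedback(A, B, zeta=0.1):
    """Find K such that the poles of A + B K lie inside the conic sector that
    guarantees a damping ratio >= zeta (Chilali-Gahinet D-stability LMI)."""
    n, m = B.shape
    theta = np.arccos(zeta)          # half-angle of the cone about the negative real axis
    s, c = np.sin(theta), np.cos(theta)

    P = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))          # Y = K P keeps the constraints linear in (P, Y)

    AclP = A @ P + B @ Y             # (A + B K) P under the substitution Y = K P
    # Conic-sector LMI: negative definiteness of this block matrix, with P > 0,
    # places the closed-loop poles inside the damping cone.
    M = cp.bmat([
        [s * (AclP + AclP.T), c * (AclP - AclP.T)],
        [c * (AclP.T - AclP), s * (AclP + AclP.T)],
    ])
    eps = 1e-6
    prob = cp.Problem(cp.Minimize(cp.norm(Y, "fro")),        # proxy for a small gain matrix
                      [P >> eps * np.eye(n), M << -eps * np.eye(2 * n)])
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None
    return Y.value @ np.linalg.inv(P.value)                  # K = Y P^{-1}
```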
Abstract:
Ultrasonography has an inherent noise pattern, called speckle, which is known to hamper object recognition for both humans and computers. Speckle noise is produced by the mutual interference of a set of scattered wavefronts. Depending on the phase of the wavefronts, the interference may be constructive or destructive, resulting in brighter or darker pixels, respectively. We propose a filter that minimizes noise fluctuation while simultaneously preserving local gray-level information. It is based on steps that attenuate the destructive and constructive interference present in ultrasound images. This filter, called interference-based speckle filter followed by anisotropic diffusion (ISFAD), was developed to remove speckle texture from B-mode ultrasound images while preserving the edges and the gray level of the region. The ISFAD performance was compared with that of 10 other filters. The evaluation was based on their application to images simulated by Field II (developed by Jensen et al.), and the proposed filter presented the greatest structural similarity, 0.95. Functional improvement of the segmentation task was also measured by comparing true-positive rate, false-positive rate, and accuracy. Using three different segmentation techniques, ISFAD also presented the best accuracy rate (greater than 90% for structures with well-defined borders). (E-mail: fernando.okara@gmail.com) (C) 2012 World Federation for Ultrasound in Medicine & Biology.
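The final stage of the pipeline named above is anisotropic diffusion. The snippet below is not the authors' ISFAD filter; it is a minimal Perona-Malik anisotropic diffusion sketch in NumPy showing the kind of edge-preserving smoothing step referred to (the iteration count, conduction coefficient kappa and time step are illustrative choices, and boundaries are treated as periodic for brevity).

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions while
    preserving edges, since large gradients get small conduction coefficients."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours (periodic via np.roll).
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        # Edge-stopping conduction coefficients (exponential variant).
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        u += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u
```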
Abstract:
The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitude and phase angles and the tap ratios of the transformers that optimize the performance of a given system, while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem. A penalty function is presented. Due to the inclusion of the penalty function into the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained and the solutions of these problems converge to a solution of the mixed problem. The obtained nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14, 30, 118 and 300-Bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
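The abstract describes handling the discrete variables (e.g., transformer tap ratios) by adding a penalty term to the objective and solving a sequence of continuous nonlinear programs. The closed form of the penalty is not given in the abstract; the sketch below uses a commonly adopted sinusoidal penalty that vanishes exactly on the discrete grid, with scipy.optimize.minimize as a stand-in for the primal-dual logarithmic-barrier solver, and an illustrative grid step and weight schedule.

```python
import numpy as np
from scipy.optimize import minimize  # stand-in NLP solver, assumed available

def discrete_penalty(x, step=0.0125, weight=1.0):
    """Penalty that is zero when every entry of x sits on a discrete grid of
    spacing `step` (e.g., tap positions) and positive in between."""
    return weight * np.sum(np.sin(np.pi * x / step) ** 2)

def solve_with_increasing_penalty(objective, x0, step=0.0125,
                                  weights=(0.1, 1.0, 10.0, 100.0)):
    """Solve a sequence of continuous problems with a growing penalty weight,
    stopping when the continuous solution has converged to the discrete grid."""
    x = np.asarray(x0, dtype=float)
    for w in weights:
        res = minimize(lambda v: objective(v) + discrete_penalty(v, step, w), x)
        x = res.x
        if np.allclose(x, np.round(x / step) * step, atol=1e-4):
            break  # discrete values reached; no need to tighten the penalty further
    return np.round(x / step) * step
```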
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first one is an unconstrained mean-variance trade-off performance criterion over time, and the second one is a minimum variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper by presenting a numerical example of a multi-period portfolio selection problem with regime switching, in which it is desired to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio greater than some minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
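The system class considered above, discrete-time linear dynamics with Markov jumps and multiplicative noise, can be made concrete with a short Monte Carlo simulation. The sketch below is purely illustrative (the regime matrices and probabilities are not taken from the paper's example): it propagates a scalar portfolio state under regime switching and estimates the mean and variance of the state over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two regimes (e.g., "bull"/"bear") with a Markov transition matrix (illustrative values).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
a = np.array([1.05, 0.98])      # nominal per-period growth in each regime
sigma = np.array([0.02, 0.08])  # multiplicative-noise intensity in each regime

def simulate(x0=1.0, T=12, n_paths=20000):
    """Simulate x_{k+1} = (a(theta_k) + sigma(theta_k) * w_k) * x_k, with theta_k
    a Markov chain and w_k standard Gaussian noise; return mean/variance over time."""
    x = np.full(n_paths, x0)
    theta = np.zeros(n_paths, dtype=int)
    means, variances = [], []
    for _ in range(T):
        w = rng.standard_normal(n_paths)
        x = (a[theta] + sigma[theta] * w) * x
        # Draw the next regime of every path from row theta of the transition matrix.
        theta = (rng.random(n_paths)[:, None] > np.cumsum(P[theta], axis=1)).sum(axis=1)
        means.append(x.mean())
        variances.append(x.var())
    return np.array(means), np.array(variances)

mean_path, var_path = simulate()
```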
Abstract:
Abstract Background In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method The cost-effectiveness of the OptiMAL® test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic results were provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results In the base case scenario, considering 92% and 95% sensitivity for thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion Microscopy is more cost-effective than OptiMAL® in these remote areas if the high accuracy of microscopy is maintained in the field. Decisions regarding the use of rapid tests for malaria diagnosis in these areas depend on the current accuracy of microscopy in the field.
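The comparison above rests on an incremental cost-effectiveness ratio (ICER): the extra cost of microscopy divided by its extra effectiveness relative to the RDT. A minimal sketch of that calculation is given below; the numbers in the example call are placeholders for illustration only, not the study's cost inputs.

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A versus strategy B:
    extra cost per extra unit of effectiveness (here, adequately diagnosed cases)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical illustration: microscopy (A) vs. RDT (B), effectiveness expressed as
# the proportion of adequately diagnosed cases in a cohort.
example = icer(cost_a=12.0, effect_a=0.95, cost_b=8.0, effect_b=0.90)
```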
Abstract:
The research is focused on the relationship between some Mg2+-dependent ATPase activities of plasma and mitochondrial membranes from tissues of cultured marine bivalve molluscs and potentially stressful environmental conditions, such as exposure to contaminants both of natural origin (ammonia nitrogen, the main contaminant of aquaculture plants) and of anthropic source (alkyltins). The two filter-feeding bivalve species selected colonize different habitats: the common mussel Mytilus galloprovincialis binds to hard substrates, and the Philippine clam Tapes philippinarum burrows into sandy sea beds. The choice of typical species of coastal waters, extremely suitable for environmental studies due to their poor motility, resistance to transport and great filtering efficiency, may constitute a model to evaluate the responses to contaminants of membrane-bound enzyme activities involved in key biochemical mechanisms, namely cell ionic regulation and mitochondrial energy production. In vitro and in vivo approaches were pursued. In vitro assays were carried out by adding the contaminants (NH4Cl and alkyltins) directly to the ATPase reaction media. In vivo experiments were carried out by exposing mussels to various tributyltin (TBT) concentrations under controlled conditions in aquaria. ATPase activities were determined spectrophotometrically according to the principles of the method of Fiske and Subbarow (1925). The main results obtained are detailed below. In Tapes philippinarum the interaction of NH4+, the main form of ammonia nitrogen at physiological and seawater pHs, with the Na,K-ATPase and the ouabain-insensitive Na-ATPase was investigated in vitro on gill and mantle microsomal membranes. The proven replacement by NH4+ of K+ in the activation of the Na,K-ATPase, and of Na+ in the activation of the ouabain-insensitive ATPase, displayed similar enzyme affinity for the substituted cation. On the one hand, this finding may represent one of the possible mechanisms of ammonia toxicity and, on the other, it supports the hypothesis that NH4+ can be transported across the plasma membrane through the two ATPases. In this case both microsomal ATPases may be involved and may co-operate, at least under particular circumstances, in nitrogen excretion and ammonia detoxification mechanisms in bivalve molluscs. The two ATPase activities stimulated by NH4+ maintained their typical response to the glycoside ouabain, a specific inhibitor of the Na,K-ATPase: the Na+ + NH4+-activated ATPase was even more susceptible to the inhibitor, while the ouabain-insensitive ATPase activity, activated indifferently by Na+ or NH4+, was unaffected by up to 10^-2 M ouabain. In vitro assays were carried out to evaluate the response of the two Na-dependent ATPases to organotins in clams and mussels and to investigate the interaction of TBT with the mussel mitochondrial oligomycin-sensitive Mg-ATPase. Since no literature data were available, the optimal assay conditions and the oligomycin sensitivity of the mussel mitochondrial Mg-ATPase were determined. In T. philippinarum the ouabain-insensitive Na-ATPase was found to be refractory to TBT both in the gills and in the mantle, whereas the Na,K-ATPase was progressively inhibited by increasing TBT doses; the enzyme inhibition was more pronounced in the gills than in the mantle. In both tissues of M. galloprovincialis the Na,K-ATPase inhibition by alkyltins decreased in the order TBT > DBT (dibutyltin) >> MBT (monobutyltin) = TeET (tetraethyltin) (no effect).
The mussel Na-ATPase confirmed its refractoriness to TBT and its derivatives both in the gills and in the mantle. These results indicate that Na,K-ATPase inhibition decreases as the number of alkyl chains bound to tin decreases; however, a certain polarity of the organotin molecule is required to yield Na,K-ATPase inhibition, since no enzyme inhibition occurred in the presence of tetraalkyl-substituted derivatives such as TeET. Assays carried out in the presence of dithioerythritol (DTE) showed that this sulfhydryl agent is capable of preventing the Na,K-ATPase inhibition by TBT, thus suggesting that the inhibitor may bind to -SH groups of the enzyme complex. Finally, the different effect of alkyltins on the two Na-dependent ATPases may constitute a further tool to differentiate between the two enzyme activities. These results add to the wealth of literature data describing different responses of the two enzyme activities to endogenous and exogenous modulators. The mussel mitochondrial Mg-ATPase was also found to be inhibited in vitro by TBT both in the gills and in the mantle; the enzyme inhibition followed non-competitive kinetics. The lack of effect of DTE indicated that in this case the interaction of TBT with the enzyme complex is probably different from that with the Na,K-ATPase. The results are consistent with literature data showing that alkyltins may interact with enzyme structures through different mechanisms. Mussel exposure to different sublethal TBT doses in aquaria was carried out for 120 hours. Two samplings (after 24 and 120 hrs) were performed in order to evaluate the short-term response of gill and mantle Na,K-ATPase, ouabain-insensitive Na-ATPase and Mg-ATPase activities. The in vivo response of the enzyme activities under study to the contaminants was shown to be partially different from that observed in the in vitro assays. Mitochondrial Mg-ATPase activity appeared to be activated in TBT-exposed mussels with respect to control ones, thus confirming the complexity of evaluating in vivo responses of enzyme activities to contaminants, due to possible interactions of toxicants with molluscan metabolism. In conclusion, the data as a whole indicate that microsomal and mitochondrial ATPase activities of bivalve molluscs are generally responsive to environmental contaminants and suggest that in some cases membrane-bound enzyme activities may represent the molecular target of their toxicity. Since the Na,K-ATPase, Na-ATPase and Mg-ATPase activities are poorly studied in marine bivalves, this research may contribute to enlarging knowledge in this largely unexplored field.
Abstract:
The classical optimal (in the Frobenius sense) diagonal preconditioner for large sparse linear systems Ax = b is generalized and improved. The new proposed approximate inverse preconditioner N is based on the minimization of the Frobenius norm of the residual matrix AM − I, where M runs over a certain linear subspace of n × n real matrices, defined by a prescribed sparsity pattern. The number of nonzero entries of the n × n preconditioning matrix N is less than or equal to 2n, and n of them are selected as the optimal positions in each of the n columns of matrix N. All theoretical results are justified in detail…
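For the classical diagonal case mentioned above, minimizing ||AM − I||_F over diagonal M decouples column by column and has the closed form m_ii = a_ii / ||A e_i||_2^2. The sketch below computes this with scipy.sparse (assumed available); it illustrates only the classical diagonal preconditioner, not the generalized sparse-pattern preconditioner N proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp

def optimal_diagonal_preconditioner(A):
    """Diagonal M minimizing ||A M - I||_F: m_ii = a_ii / ||A e_i||_2^2, i.e. each
    diagonal entry of A divided by the squared 2-norm of the corresponding column."""
    A = sp.csc_matrix(A)
    col_norms_sq = np.asarray(A.multiply(A).sum(axis=0)).ravel()  # ||A e_i||^2
    m = A.diagonal() / col_norms_sq
    return sp.diags(m)

# Usage: M = optimal_diagonal_preconditioner(A); solve (A M) y = b and set x = M y,
# or pass M as a right preconditioner to an iterative solver.
```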
Abstract:
A security layer that provides user authentication and authorization and keeps track of every operation performed does not prevent a network from being subject to security incidents, which may stem from attempts to access hosts through illicit privilege escalation or from classic malicious programs such as viruses, trojans and worms. One remedy for identifying potential threats is the use of an IDS (Intrusion Detection System) device, whose task is to analyze the traffic and compare it with a set of signatures describing known intrusion scenarios. Even with high hardware processing capacity, the available resources may not be sufficient to guarantee correct operation of the service on the entire traffic crossing a network. The goal of this thesis is the creation of an application that performs a preliminary analysis, so as to reduce the amount of data to be submitted to the IDS in the actual traffic-scanning phase. To do this, statistics computed on data provided directly by the network devices are exploited, in an attempt to identify traffic that uses known protocols and can therefore be judged, with good probability, not to be dangerous.
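A minimal sketch of the pre-filtering idea described above, under the assumption that flow-level statistics exported by network devices are available as simple records: flows matching a known, well-behaved service profile are set aside, and only the remainder is forwarded to the IDS for deep inspection. The field names, service profiles and thresholds below are illustrative, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    proto: str        # "tcp" / "udp"
    dst_port: int
    bytes_total: int
    packets: int

# Hypothetical profiles of "known" services considered low-risk for deep inspection.
KNOWN_SERVICES = {
    ("tcp", 443): {"max_avg_pkt": 1600},
    ("tcp", 80):  {"max_avg_pkt": 1600},
    ("udp", 53):  {"max_avg_pkt": 512},
}

def needs_ids_inspection(flow: Flow) -> bool:
    """Return True if the flow should still be sent to the IDS."""
    profile = KNOWN_SERVICES.get((flow.proto, flow.dst_port))
    if profile is None:
        return True                                # unknown protocol/port: inspect
    avg_pkt = flow.bytes_total / max(flow.packets, 1)
    return avg_pkt > profile["max_avg_pkt"]        # statistics outside the profile: inspect

def prefilter(flows):
    """Split flows into (to_ids, likely_benign) before the costly IDS scan."""
    to_ids = [f for f in flows if needs_ids_inspection(f)]
    benign = [f for f in flows if not needs_ids_inspection(f)]
    return to_ids, benign
```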
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort has a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work is focused on the development of an ad hoc version of the Optimal Estimation Method (Rodgers, 2000) in order to retrieve HNO3 spectra observed by means of GBMS (a minimal sketch of the linear OEM formulas is given after this abstract). I also performed a comparison between HNO3 vertical profiles retrieved with the OEM and those obtained with the old iterative Matrix Inversion method. Results show no significant differences in retrieved profiles and error estimates, with the OEM however providing additional information needed to better characterize the retrievals. A final section of this first part of the work is dedicated to a brief review of the application of the OEM to other trace gases observed by GBMS, namely O3 and N2O. The second part of this study deals with the validation of HNO3 profiles obtained with the new inversion method. The first step was the validation of GBMS measurements of tropospheric opacity, which is a necessary tool in the calibration of any GBMS spectra. This was achieved by means of comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV), since, in the spectral region observed by GBMS, the tropospheric opacity is almost entirely due to water vapor absorption. In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference between observations of ~9%. After this initial validation, GBMS HNO3 retrievals have been compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis is carried out within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare Aura MLS observations with GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W), during polar winter 2008/09.
A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out from the GBMS at South Pole, Antarctica (90° S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels. Results show that, except for measurements carried out at Thule, ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
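As a minimal sketch of the optimal estimation retrieval referred to in the abstract above (Rodgers, 2000), the linear case reads: given a forward-model Jacobian K, an a priori profile x_a with covariance S_a, and a measurement y with noise covariance S_e, the maximum a posteriori profile, its covariance and the averaging kernel follow from the standard OEM formulas. This is a generic illustration, not the GBMS retrieval code.

```python
import numpy as np

def oem_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal estimation (Rodgers, 2000):
       x_hat = x_a + G (y - K x_a),  G = (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1.
    Returns the retrieved profile, its covariance and the averaging kernel A = G K."""
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)   # retrieval covariance
    G = S_hat @ K.T @ S_e_inv                            # gain (contribution) matrix
    x_hat = x_a + G @ (y - K @ x_a)
    A = G @ K                                            # averaging kernel
    return x_hat, S_hat, A
```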
Abstract:
In the thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideal for estimating the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large scale structure (LSS) maps as well as their individual auto-spectra. Such a tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all the previous harmonic analyses present in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. On the contrary, we neglect the noise in temperature since WMAP is already cosmic variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: the first one with a constant bias $b$ and the second one with a redshift-dependent bias $b(z)$. Finally, we make use of the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model by different likelihood prescriptions. When using just the cross-correlation between WMAP7 and NVSS maps with 1.8° resolution, we show that $\Omega_\Lambda$ is about 70% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2 $\sigma$ CL (confidence level).
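The QML machinery summarized above can be written compactly: for a data vector x with covariance C = N + sum_l C_l P_l, the quadratic forms q_l = x^T C^-1 P_l C^-1 x are debiased and decorrelated with the Fisher matrix F_{ll'} = (1/2) tr(C^-1 P_l C^-1 P_l'). The sketch below is a generic pixel-space implementation under these assumptions, not the BolISW code; it is only practical for small map sizes since it uses dense matrix inverses.

```python
import numpy as np

def qml_power_spectrum(x, P, N, C_fid):
    """Quadratic maximum likelihood estimate of band powers.
    x     : data vector (n_pix,)
    P     : list of signal covariance derivatives dC/dC_ell, each (n_pix, n_pix)
    N     : noise covariance matrix (n_pix, n_pix)
    C_fid : fiducial band powers used to build the weighting covariance."""
    C = N + sum(c * P_l for c, P_l in zip(C_fid, P))       # fiducial total covariance
    Ci = np.linalg.inv(C)
    W = [Ci @ P_l @ Ci for P_l in P]                        # weighting matrices
    q = np.array([x @ W_l @ x for W_l in W])                # quadratic forms
    b = np.array([np.trace(W_l @ N) for W_l in W])          # noise bias
    F = 0.5 * np.array([[np.trace(Ci @ P_l @ Ci @ P_m) for P_m in P] for P_l in P])
    return 0.5 * np.linalg.solve(F, q - b)                  # unbiased band-power estimates
```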
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve the control on the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem are shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
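The "appropriate gradient-type algorithm" mentioned above can be summarized, independently of the MHD specifics, as an adjoint-based descent loop on the reduced cost functional: solve the state equations for the current control, solve the adjoint system derived from the optimality conditions, assemble the gradient, and update the control with a backtracking step. The skeleton below is a generic sketch of that loop; solve_state, solve_adjoint and assemble_gradient are placeholders standing in for the finite element solvers described in the thesis.

```python
import numpy as np

def gradient_descent_control(control0, solve_state, solve_adjoint, assemble_gradient,
                             cost, step0=1.0, tol=1e-6, max_iter=100):
    """Generic reduced-gradient loop for a PDE-constrained optimal control problem."""
    control = np.asarray(control0, dtype=float)
    for _ in range(max_iter):
        state = solve_state(control)               # forward (state) solve
        adjoint = solve_adjoint(state, control)    # adjoint solve from the optimality system
        grad = assemble_gradient(state, adjoint, control)
        if np.linalg.norm(grad) < tol:
            break                                  # stationary point of the reduced functional
        # Backtracking line search on the reduced cost functional.
        step, J_old = step0, cost(state, control)
        candidate = control - step * grad
        while cost(solve_state(candidate), candidate) >= J_old and step > 1e-12:
            step *= 0.5
            candidate = control - step * grad
        control = candidate
    return control
```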
Abstract:
This dissertation analyzes the effect of market analysts' expectations of share prices (price targets) on executive compensation. It examines how well the estimated effects of price targets on compensation fit with two competing views on determining executive compensation: the arm's length bargaining model, which assumes that a board seeks to maximize shareholders' interests, and the managerial power model, which assumes that a board seeks to maximize managers' compensation (Bebchuk et al. 2005). The first chapter documents the pattern of CEO pay from fiscal year 1996 to 2010. The second chapter analyzes the Institutional Brokers' Estimate System Detail History Price Target data file, which reports analysts' price targets for firms. I show that the number of price target announcements is positively associated with the volatility of a company's share price, that price targets are predictive of changes in the value of stocks, and that when analysts announce positive (negative) expectations of future stock prices, share prices change in the same direction in the short run. The third chapter analyzes the effect of price targets on executive compensation. I find that analysts' price targets alter the composition of executive pay between cash-based compensation and stock-based compensation. When analysts forecast a rise (fall) in the share price of a firm, the compensation package tilts toward stock-based (cash-based) compensation. The substitution effect is stronger in companies that have weaker corporate governance. The fourth chapter explores the effect of the introduction of the Sarbanes-Oxley Act (SOX) in 2002 and its reinforcement in 2006 on the option granting process. I show that the introduction of SOX and its reinforcement eliminated the practice of backdating options but increased "spring-loading" of option grants around price target announcements. Overall, the dissertation shows that price targets provide insights into the determinants of executive pay in favor of the managerial power model.