1000 results for Filter-function
Abstract:
The power generated by large grid-connected photovoltaic (PV) plants depends greatly on the solar irradiance. This paper studies the effects of solar irradiance variability by analyzing experimental 1-s data collected throughout a year at six PV plants totaling 18 MWp. Each PV plant was modeled as a first-order filter function based on a frequency-domain analysis of the irradiance data and the output power signals. An empirical expression relating the filter parameters to the PV plant size is proposed. This simple model has been successfully validated by precisely determining the daily maximum output power fluctuation from incident irradiance measurements.
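The first-order filter behavior described above can be sketched as a discrete exponential smoother applied to an irradiance signal. This is a minimal illustration, not the authors' model: the time constants and the synthetic irradiance trace are hypothetical, chosen only to show how a larger plant (larger time constant) smooths fast fluctuations more strongly.

```python
import numpy as np

def pv_plant_power(irradiance, dt, tau):
    """First-order low-pass filter of an irradiance signal.

    Discretized as p[k] = p[k-1] + (dt / (tau + dt)) * (g[k] - p[k-1]),
    i.e. an exponential smoother with time constant tau (seconds).
    """
    alpha = dt / (tau + dt)
    power = np.empty_like(irradiance, dtype=float)
    power[0] = irradiance[0]
    for k in range(1, len(irradiance)):
        power[k] = power[k - 1] + alpha * (irradiance[k] - power[k - 1])
    return power

# A fluctuating 1-s irradiance trace: larger plants (larger tau) smooth more.
rng = np.random.default_rng(0)
g = 800 + 100 * rng.standard_normal(600)        # W/m^2, 10 minutes of 1-s data
p_small = pv_plant_power(g, dt=1.0, tau=5.0)    # small plant, weak smoothing
p_large = pv_plant_power(g, dt=1.0, tau=60.0)   # large plant, strong smoothing
assert np.std(p_large) < np.std(p_small) < np.std(g)
```

The assertion checks the qualitative effect the paper exploits: the larger the plant, the more the output power fluctuation is attenuated relative to the incident irradiance.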
Abstract:
In this work, I study the relations between the political blogosphere of the city of São Paulo, Brazil, and the broader media ecology in which it is embedded. In dialogue with theories that frame the spread of blogs as a moment of redistribution of the means of production and of the democratization of speech, I propose instead that the blogosphere should be understood as an instance for filtering the products of the mass media. I borrow the concept of "dispositif" (apparatus) from Michel Foucault and Giorgio Agamben to define contemporary media in terms of a media apparatus, and the concepts of the "two-step flow of communications" and the "opinion leader" from Paul F. Lazarsfeld and Elihu Katz to locate the blogosphere within our media space. I also argue that bloggers today perform a function similar to that of the opinion leaders described by Katz and Lazarsfeld, what I call the filter function of the blogosphere, and that these bloggers occupy the intermediate layer of the information flow in Western democracies, the layer between the mass media and the receivers. To conduct my research, I carried out a virtual ethnography among bloggers of the city of São Paulo, Brazil, during the 2008 mayoral election campaign, and then submitted the whole of their discursive production to a sociolinguistic analysis. I conclude that, rather than being the fruit of a revolution, the contemporary media ecology takes up, while diversifying and extending them, processes that were once thought specific to the dynamics of the mass media.
Abstract:
The search for Earth-like exoplanets, orbiting in the habitable zone of stars other than our Sun and showing biological activity, is one of the most exciting and challenging quests of the present time. Nulling interferometry from space, in the thermal infrared, appears to be a promising technique for directly observing extra-solar planets. It has been studied for about 10 years by ESA and NASA in the framework of the Darwin and TPF-I missions, respectively. Nevertheless, nulling interferometry in the thermal infrared remains a technological challenge at several levels. Among them, the development of the "modal filter" function is mandatory for filtering the wavefronts, consistent with the objective of rejecting the central star flux with an efficiency of about 10^5. Modal filtering takes advantage of the ability of single-mode waveguides to transmit a single amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible. The modal filter may be based on single-mode Integrated Optics (IO) and/or Fiber Optics. In this paper, we focus on IO, and more specifically on the progress of the ongoing "Integrated Optics" activity of the European Space Agency.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Purpose: There is no consensus on the optimal method to measure delivered dialysis dose in patients with acute kidney injury (AKI). The use of direct dialysate-side quantification of dose, in preference to formal blood-based urea kinetic modeling and simplified blood urea nitrogen (BUN) methods, has been recommended for dose assessment in critically ill patients with AKI. We evaluate six different blood-side and dialysate-side methods for dose quantification. Methods: We examined data from 52 critically ill patients with AKI requiring dialysis. All patients were treated with pre-dilution CVVHDF and regional citrate anticoagulation. Delivered dose was calculated using blood-side and dialysate-side kinetics. Filter function was assessed during the entire course of therapy by calculating BUN to dialysis fluid urea nitrogen (FUN) ratios every 12 hours. Results: Median daily treatment time was 1,413 min (1,260-1,440). The median observed effluent volume per treatment was 2,355 mL/h (2,060-2,863) (p<0.001). The urea mass removal rate was 13.0 +/- 7.6 mg/min. Both EKR (r²=0.250; p<0.001) and K-D (r²=0.409; p<0.001) showed a good correlation with actual solute removal. EKR and K-D presented a decline in their values that was related to the decrease in filter function assessed by the FUN/BUN ratio. Conclusions: Effluent rate (mL/kg/h) can only empirically provide an estimate of dose in CRRT. For clinical practice, we recommend that the delivered dose be measured and expressed as K-D. EKR also constitutes a good method for dose comparisons over time and across modalities.
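The dialysate-side arithmetic behind a clearance figure like K-D can be illustrated in a few lines. This is only a sketch of the general principle, not the study's dose-quantification protocol, and the flow and concentration readings below are hypothetical, not data from this study.

```python
def dialysate_side_clearance(effluent_ml_h, fun_mg_dl, bun_mg_dl):
    """Dialysate-side urea clearance (mL/min) from effluent measurements.

    Urea mass removal rate = effluent flow * effluent urea nitrogen (FUN);
    dividing by the blood-side concentration (BUN) gives an effective
    clearance. The FUN/BUN ratio itself tracks filter function: it drifts
    downward as the membrane loses efficiency.
    """
    effluent_ml_min = effluent_ml_h / 60.0
    removal_mg_min = effluent_ml_min * fun_mg_dl / 100.0  # mg/dL -> mg/mL
    clearance = removal_mg_min / (bun_mg_dl / 100.0)      # mL/min
    return clearance, fun_mg_dl / bun_mg_dl

# Hypothetical readings: 2,400 mL/h effluent, FUN 54 mg/dL, BUN 60 mg/dL.
k_d, ratio = dialysate_side_clearance(2400.0, 54.0, 60.0)
print(round(k_d, 1), round(ratio, 2))   # 36.0 mL/min, FUN/BUN ratio 0.9
```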
Abstract:
We compare the performance of two different low-storage filter diagonalisation (LSFD) strategies in the calculation of complex resonance energies of the HO2 radical. The first is carried out within a complex-symmetric Lanczos subspace representation [H. Zhang, S.C. Smith, Phys. Chem. Chem. Phys. 3 (2001) 2281]. The second involves harmonic inversion of a real autocorrelation function obtained via a damped Chebyshev recursion [V.A. Mandelshtam, H.S. Taylor, J. Chem. Phys. 107 (1997) 6756]. We find that while the Chebyshev approach has the advantage of using real algebra in the time-consuming process of generating the vector recursion, the Lanczos method (using complex vectors) requires fewer iterations, especially for the low-energy part of the spectrum. The overall efficiency of the two methods in calculating resonances is comparable for this challenging system.
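The Chebyshev recursion underlying the second strategy can be sketched in a few lines. This is a toy version, not the cited implementation: the damping is a scalar stand-in for the energy-dependent damping operator, and the two-state Hamiltonian exists only to verify the recursion against the closed form T_k(cos θ) = cos(kθ).

```python
import numpy as np

def chebyshev_autocorrelation(H, psi0, n_terms, damping=1.0):
    """Real Chebyshev autocorrelation sequence c_k = <psi0 | T_k(H) | psi0>.

    H must be scaled so its spectrum lies in [-1, 1]. The three-term
    recursion T_{k+1} = 2 H T_k - T_{k-1} uses only real algebra; the
    optional scalar damping factor mimics the damped recursion used
    for resonance calculations.
    """
    c = np.empty(n_terms)
    prev = psi0.copy()                 # T_0 |psi0>
    cur = damping * (H @ psi0)         # T_1 |psi0> (damped)
    c[0] = psi0 @ prev
    if n_terms > 1:
        c[1] = psi0 @ cur
    for k in range(2, n_terms):
        prev, cur = cur, damping * (2.0 * (H @ cur) - prev)
        c[k] = psi0 @ cur
    return c

# Check: for psi0 an eigenvector of H with eigenvalue cos(theta) and no
# damping, c_k = T_k(cos theta) = cos(k * theta).
theta = 0.3
H = np.diag([np.cos(theta), -0.5])
psi0 = np.array([1.0, 0.0])
c = chebyshev_autocorrelation(H, psi0, 6)
assert np.allclose(c, np.cos(np.arange(6) * theta))
```

In harmonic inversion, a sequence like `c` is then fitted to a sum of damped cosines to extract resonance positions and widths.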
Abstract:
Several suction-water content (s-w) calibrations for the filter paper method (FPM) used for soil suction measurement have been published. Most of the calibrations involve a bilinear function (i.e., two different equations) with an inflection point occurring at 60 kPa.
Abstract:
Resonance phenomena associated with the unimolecular dissociation of HO2 have been investigated quantum-mechanically by the Lanczos homogeneous filter diagonalization (LHFD) method. The calculated resonance energies, rates (widths), and product state distributions are compared to results from an autocorrelation function-based filter diagonalization (ACFFD) method. For calculating resonance wave functions via ACFFD, an analytical expression for the expansion coefficients of the modified Chebyshev polynomials is introduced. Both the dissociation rates and the product state distributions of O2 show strong fluctuations, indicating that the dissociation of HO2 is essentially irregular.
Abstract:
A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is, consequently, imperative to estimate the value of LVEF precisely, a process that can be done with myocardial perfusion scintigraphy. The present study therefore aimed to establish and compare the estimation performance of the quantitative parameters of two reconstruction methods: filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). Methods: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. Results: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is verified with the OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. Conclusion: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders 5, 10, or 15 for FBP, as the best estimations for left ventricular volume and ejection fraction quantification in myocardial perfusion scintigraphy.
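The Butterworth roll-off used with FBP is a simple frequency-domain window parameterized exactly by the cutoff frequency (cycles per pixel) and order mentioned above. A minimal sketch, with an illustrative frequency grid:

```python
import numpy as np

def butterworth_window(freqs, cutoff, order):
    """Low-pass Butterworth window used to roll off the ramp filter in FBP.

    freqs and cutoff are in cycles per pixel; a higher cutoff keeps more
    high-frequency detail (sharper but noisier images), and a higher order
    steepens the roll-off around the cutoff.
    """
    return 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))

freqs = np.linspace(0.0, 0.5, 6)               # up to the Nyquist frequency
w = butterworth_window(freqs, cutoff=0.5, order=5)
assert np.isclose(w[0], 1.0)                   # passes DC unchanged
assert np.isclose(w[-1], 1.0 / np.sqrt(2.0))   # -3 dB at the cutoff
```

Raising `order` from 5 to 20 leaves the passband and the -3 dB point unchanged but makes the transition around the cutoff much sharper, which is why the study varies the two parameters independently.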
Abstract:
In real optimization problems, the analytical expression of the objective function and its derivatives are usually unknown, or too complex to work with. In these cases it becomes essential to use optimization methods that require neither computing the derivatives nor verifying their existence: direct search methods, or derivative-free methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, choosing the penalty parameters is frequently very difficult, because most strategies for doing so are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem, in which a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. The method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
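The filter acceptance rule described above amounts to a dominance test over stored (constraint violation, objective) pairs. The following is a minimal illustration on a toy constrained problem, not the proposed simplex filter method; the problem and filter entries are hypothetical.

```python
def acceptable(x, filter_set, f, h):
    """Filter acceptance test for a candidate point x.

    The pair (h(x), f(x)) -- constraint violation and objective value --
    is accepted if no stored filter entry dominates it, i.e. no entry is
    at least as good in both measures. No penalty parameter is needed.
    """
    hx, fx = h(x), f(x)
    return not any(he <= hx and fe <= fx for he, fe in filter_set)

# Toy problem: minimize f(x) = x^2 subject to x >= 1,
# with violation measure h(x) = max(0, 1 - x).
f = lambda x: x * x
h = lambda x: max(0.0, 1.0 - x)
filt = {(0.5, 4.0), (0.0, 9.0)}          # (h, f) pairs already in the filter

assert acceptable(1.5, filt, f, h)       # (0.0, 2.25): undominated, accepted
assert not acceptable(-2.0, filt, f, h)  # (3.0, 4.0): dominated by (0.5, 4.0)
```

A full filter algorithm would also add each accepted pair to `filt` (pruning entries it dominates), so the filter tightens as the search proceeds.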
Abstract:
The filter method is a technique for solving nonlinear programming problems. A filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, the objective function is usually not differentiable, or its derivatives are unknown. In these cases it becomes essential to use optimization methods where computing the derivatives, or verifying their existence, is not necessary: direct search methods and derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants, or Lagrange multipliers.
Abstract:
Amorphous SiC tandem heterostructures are used to filter a specific band in the visible range. Experimental and simulated results are compared to validate the use of SiC multilayered structures in applications where gain compensation is needed or where unwanted wavelengths must be attenuated. Spectral response data acquired under different frequencies, optical wavelength control, and side irradiations are analyzed. Transfer function characteristics are discussed. Color pulsed communication channels are transmitted together and the output signal is analyzed under different background conditions. Results show that under controlled wavelength backgrounds, the device sensitivity is enhanced in a precise wavelength range and quenched in the others, tuning or suppressing a specific band. Depending on the background wavelength and irradiation side, the device acts either as a long-pass, a short-pass, or a band-rejection filter. An optoelectronic model supports the experimental results and gives insight into the physics of the device.
Abstract:
The purpose of this work is to present an algorithm for solving nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process toward the feasible region, i.e. finds a point with less constraint violation. The optimality phase starts from this point and aims to optimize the objective function within the satisfied-constraints space. To evaluate the solution approximations in each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces the merit functions that are based on penalty schemes, avoiding the related difficulties, such as the estimation of the penalty parameter and the non-differentiability of some of these functions. The filter method is implemented in the context of the line search globalization technique. A set of more than two hundred AMPL test problems is solved. The algorithm developed is compared with the LOQO and NPSOL software packages.
Abstract:
A new iterative algorithm, based on the inexact-restoration (IR) approach combined with the filter strategy, for solving nonlinear constrained optimization problems is presented. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646-669, 2003) but not yet implemented: the internal algorithms were not proposed. The filter, a concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239-269, 2002), replaces the merit function, avoiding the estimation of the penalty parameter and the difficulties related to non-differentiability. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The line search filter is combined with the first phase to generate a "more feasible" point, and is then used in the optimality phase to reach an "optimal" point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
Abstract:
Purpose: To determine the relationship of goblet cell density (GCD) with tear function and ocular surface physiology. Methods: This was a cross-sectional study conducted in 35 asymptomatic subjects with a mean age of 23.8±3.6 years. Tear film assessment and conjunctiva and cornea examination were done in each subject. Conjunctival impression cytology was performed by applying a Nitrocellulose Millipore MF-Membrane filter over the superior bulbar conjunctiva. The filter paper was then fixed with 96% ethanol and stained with Periodic Acid Schiff, Hematoxylin and Eosin. GCD was determined by optical microscopy. The relation between GCD and Schirmer score, tear break-up time (TBUT), bulbar redness, limbal redness, and corneal staining was determined. Results: The mean GCD was 151±122 cells/mm². GCD was higher in eyes with a higher Schirmer score, but the association was not significant (p = 0.75). There was a significant relationship of GCD with TBUT (p = 0.042). GCD was not correlated with bulbar redness (p = 0.126), limbal redness (p = 0.054), or corneal staining (p = 0.079). No relationship of GCD with the age or gender of the subjects (p > 0.05) was observed. Conclusion: GCD was correlated with TBUT, but no significant correlation was found with the aqueous portion of the tear, limbal or bulbar redness, or corneal staining.