80 results for input parameter value recommendation
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
OBJECTIVE: To analyze the diagnostic accuracy of two indirect immunofluorescence protocols for canine visceral leishmaniasis. METHODS: Dogs from a seroepidemiological survey carried out in 2003 in an endemic area comprising the municipalities of Araçatuba and Andradina, in the northwestern region of the state of São Paulo, and in a non-endemic area of the metropolitan region of São Paulo, were used to comparatively evaluate two protocols of the indirect immunofluorescence reaction (RIFI) for leishmaniasis: one using the heterologous antigen Leishmania major (RIFI-BM) and the other using the homologous antigen Leishmania chagasi (RIFI-CH). Accuracy was estimated by two-graph receiver operating characteristic (TG-ROC) analysis, which compared the readings at the 1:20 dilution of the homologous antigen (RIFI-CH), taken as the reference test, with the dilutions of RIFI-BM (heterologous antigen). RESULTS: The 1:20 dilution of the RIFI-CH test showed the best contingency coefficient (0.755) and the strongest association between the two variables studied (chi-square = 124.3), and was therefore adopted as the reference dilution in the comparisons with the different dilutions of the RIFI-BM test. The best RIFI-BM results were obtained at the 1:40 dilution, with the best contingency coefficient (0.680) and the strongest association (chi-square = 80.8). With the cutoff change suggested by this analysis for the 1:40 dilution of RIFI-BM, specificity increased from 57.5% to 97.7%, although the 1:80 dilution gave the best sensitivity estimate (80.2%) under the new cutoff. CONCLUSIONS: TG-ROC analysis can provide important information about diagnostic tests, suggesting cutoff points that may improve the sensitivity and specificity estimates of a test, and allowing tests to be evaluated in light of the best cost-benefit ratio.
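For readers unfamiliar with the technique, a minimal sketch of the core TG-ROC computation follows, assuming quantitative serological readings and a binary reference classification as inputs; the function names and data layout are illustrative, not the authors' implementation.

```python
import numpy as np

def tg_roc(readings, reference_positive):
    """Two-graph ROC: sweep every candidate cutoff and record (cutoff, Se, Sp).

    readings           -- quantitative test results (e.g., reciprocal titers)
    reference_positive -- boolean result of the reference test per animal
    """
    readings = np.asarray(readings, dtype=float)
    reference_positive = np.asarray(reference_positive, dtype=bool)
    curves = []
    for cutoff in np.unique(readings):
        called_positive = readings >= cutoff
        se = called_positive[reference_positive].mean()      # sensitivity
        sp = (~called_positive)[~reference_positive].mean()  # specificity
        curves.append((cutoff, se, sp))
    return curves

def suggested_cutoff(curves):
    # TG-ROC plots Se and Sp against the cutoff; the suggested cutoff
    # is typically where the two curves intersect (Se ~= Sp).
    return min(curves, key=lambda c: abs(c[1] - c[2]))[0]
```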
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
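As an illustration of the kind of kernel-parameter sweep described above, here is a minimal sketch using scikit-learn's standard SVM; the feature matrix, labels, radius grid, and fold count are placeholders rather than the paper's exact setup.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sweep_rbf_radius(X, y, radii, folds=10):
    """Mean cross-validated accuracy of a standard SVM for each kernel radius.

    X     -- feature matrix (e.g., wavelet statistics per EEG segment)
    y     -- labels (0 = normal, 1 = epileptic)
    radii -- candidate values of the Gaussian kernel radius sigma
    """
    scores = {}
    for sigma in radii:
        # Gaussian RBF: K(u, v) = exp(-||u - v||^2 / (2 sigma^2))
        clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2))
        scores[sigma] = cross_val_score(clf, X, y, cv=folds).mean()
    return scores
```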
Abstract:
An investigation was performed into the effect of liquid-phase recirculation velocity and increasing influent concentration on the stability and efficiency of an anaerobic sequencing batch reactor (ASBR) containing granular biomass. The reactor treated 1.3 L of synthetic wastewater at 30 °C in 6 h cycles. Initially, the effect of recirculation velocity was investigated employing velocities of 5, 7 and 10 m/h and an influent concentration of 500 mg COD/L. At these velocities, filtered-sample organic matter removal efficiencies were 83, 85 and 84%, respectively. A first-order kinetic model could also be fitted to the experimental organic matter concentration profiles. The kinetic parameter values of this model were 1.35, 2.36 and 1.00 h⁻¹ at the recirculation velocities of 5, 7 and 10 m/h, respectively. The recirculation velocity of 7 m/h was found to be the best operating strategy, and this value was maintained while the influent concentration was altered in order to verify system efficiency and stability at increasing organic load. An influent concentration of 1000 mg COD/L resulted in a filtered-sample organic matter removal efficiency of 80% and a first-order kinetic parameter value of 1.14 h⁻¹, whereas the concentration of 1500 mg COD/L resulted in an efficiency of 82% and a kinetic parameter value of 1.31 h⁻¹. (C) 2007 Elsevier B.V. All rights reserved.
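A minimal sketch of fitting such a first-order kinetic parameter to a within-cycle concentration profile, using SciPy; the time points and COD values below are made-up placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, c_res, k):
    # C(t) = c_res + (c0 - c_res) * exp(-k t): decay from c0 toward residual c_res
    return c_res + (c0 - c_res) * np.exp(-k * t)

# Hypothetical within-cycle profile: times in hours, COD in mg/L.
t_h = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
cod = np.array([500.0, 310.0, 205.0, 120.0, 95.0, 88.0, 85.0])

(c0, c_res, k), _ = curve_fit(first_order, t_h, cod, p0=(500.0, 80.0, 1.0))
print(f"first-order kinetic parameter k = {k:.2f} h^-1")
```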
Abstract:
In this paper, we introduce a method for establishing the existence of secondary bifurcations or isolas of steady-state solutions of parameter-dependent nonlinear partial differential equations. The technique combines the Global Bifurcation Theorem, knowledge of the non-existence of nontrivial steady-state solutions at the zero parameter value, and explicit information about the coexistence of multiple nontrivial steady states at a positive parameter value. We apply the method to the two-dimensional Swift-Hohenberg equation. (C) 2011 Elsevier Ltd. All rights reserved.
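For reference, the Swift-Hohenberg equation in one common convention, with bifurcation parameter λ (the specific scaling and nonlinearity used in the paper may differ):

```latex
% Swift-Hohenberg equation with cubic nonlinearity on a 2D domain
\[
  \partial_t u = \lambda u - (1 + \Delta)^2 u - u^3,
  \qquad u = u(x, y, t), \quad \Delta = \partial_x^2 + \partial_y^2,
\]
% steady states solve 0 = \lambda u - (1 + \Delta)^2 u - u^3.
```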
Abstract:
This work reports the energy transfer mechanism of [Eu(TTA)₂(NO₃)(TPPO)₂] (bis-TTA complex) and [Eu(TTA)₃(TPPO)₂] (tris-TTA complex) based on experimental and theoretical spectroscopic properties, where TTA = 2-thienoyltrifluoroacetone and TPPO = triphenylphosphine oxide. These complexes were synthesized and characterized by elemental analysis, infrared spectroscopy and thermogravimetric analysis. The theoretical geometries of the complexes, obtained using the Sparkle model for the calculation of lanthanide complexes (SMLC), are in agreement with the crystalline structure determined by single-crystal X-ray diffraction analysis. The emission spectra of the [Gd(TTA)₃(TPPO)₂] and [Gd(TTA)₂(NO₃)(TPPO)₂] complexes are assigned to T → S₀ transitions centered on the coordinated TTA ligands. The experimental luminescent properties of the bis-TTA complex were quantified through the emission intensity parameters Ω_λ (λ = 2 and 4), spontaneous emission rates (A_rad), luminescence lifetime (τ), emission quantum efficiency (η) and emission quantum yield (q), which were compared with those of the tris-TTA complex. The experimental data showed that the intensity parameter value for the bis-TTA complex is half that of the tris-TTA complex, indicating a less polarizable chemical environment in the system containing the nitrate ion. Good agreement between the theoretical and experimental quantum yields was obtained for both Eu(III) complexes. The triboluminescence (TL) of the [Eu(TTA)₂(NO₃)(TPPO)₂] complex is discussed in terms of ligand-to-metal energy transfer. (c) 2007 Elsevier B.V. All rights reserved.
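One quantitative link among the quantities listed above: the emission quantum efficiency is the radiative fraction of the total decay rate, and the total rate is the reciprocal of the luminescence lifetime, so η = A_rad · τ. A tiny sketch with hypothetical numbers:

```python
def emission_quantum_efficiency(a_rad, lifetime):
    # eta = A_rad / (A_rad + A_nrad), and A_rad + A_nrad = 1 / tau,
    # hence eta = A_rad * tau.
    return a_rad * lifetime

# Hypothetical values for illustration only (not the paper's data):
print(emission_quantum_efficiency(a_rad=700.0, lifetime=0.7e-3))  # -> 0.49
```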
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but due to state explosion their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
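The PD formalism itself is not spelled out in the abstract; the sketch below only illustrates the general idea of constrained-random generation over declared parameter domains, with made-up domain names, in Python rather than a hardware verification language:

```python
import random

# Each parameter lists only its legal, relevant ranges; invalid or
# irrelevant scenarios are never generated in the first place.
PARAMETER_DOMAINS = {
    "burst_len": [(1, 4), (8, 8), (16, 16)],   # hypothetical legal bursts
    "addr_offset": [(0, 0), (4, 4), (8, 12)],  # hypothetical alignments
}

def draw_stimulus(domains):
    """Draw one constrained-random stimulus: pick a range per parameter,
    then a value inside it."""
    return {name: random.randint(*random.choice(ranges))
            for name, ranges in domains.items()}

def coverage_model(domains):
    """A coverage model that fits the input space exactly: one bin per
    declared range, so no bin is unreachable."""
    return {name: {r: 0 for r in ranges} for name, ranges in domains.items()}
```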
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations, and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
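A minimal sketch of the PCA step, assuming the Hubble parameter has already been binned in redshift and its Fisher matrix computed; the reconstruction truncates to the best-constrained modes. This is a generic illustration, not the authors' pipeline:

```python
import numpy as np

def principal_components(fisher):
    """Eigen-decompose a (symmetric) Fisher matrix; the eigenvectors are the
    principal components, ordered from best- to worst-constrained."""
    eigvals, eigvecs = np.linalg.eigh(fisher)
    order = np.argsort(eigvals)[::-1]      # largest eigenvalue = tightest mode
    return eigvals[order], eigvecs[:, order]

def reconstruct(h_fiducial, coeffs, eigvecs, n_modes):
    """H(z_i) as the fiducial values plus the first n_modes components,
    weighted by their fitted coefficients."""
    return h_fiducial + eigvecs[:, :n_modes] @ coeffs[:n_modes]
```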
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-θ state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed on tests performed on the Hydro-Quebec TransEnergie network.
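Stage 1's Identification Index is simple enough to state in code; the sketch below follows the ratio defined above, with a hypothetical threshold and residual data:

```python
def identification_index(normalized_residuals, threshold=3.0):
    """II of a branch: the fraction of measurements adjacent to the branch
    whose normalized residual magnitude exceeds the threshold."""
    flagged = sum(1 for r in normalized_residuals if abs(r) > threshold)
    return flagged / len(normalized_residuals)

# Hypothetical residuals for two branches; the higher the II, the more
# suspicious the branch parameters.
print(identification_index([4.2, 3.5, 0.8, 5.1]))   # -> 0.75: suspect
print(identification_index([0.4, 1.1, 0.9, 0.2]))   # -> 0.0:  clean
```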
Abstract:
In this paper we consider the case of a Bose gas in low dimension in order to illustrate the applicability of a method that allows us to construct analytical relations, valid for a broad range of coupling parameters, for a function whose asymptotic expansions are known. The method is well suited to investigating the stability of a collection of Bose particles trapped in a one-dimensional configuration when the scattering length is negative. The eigenvalues of this interacting quantum one-dimensional many-particle system become negative when the interactions overcome the trapping energy, and in this case the system becomes unstable. Here we calculate the critical coupling parameter and apply the result to the case of lithium atoms, obtaining the critical number of particles for the limit of stability.
Abstract:
Dental roots that have been exposed to the oral cavity and to the periodontal pocket environment present superficial changes, which can prevent connective tissue reattachment. Demineralizing agents have been used as an adjunct to periodontal treatment, aiming at restoring the biocompatibility of roots. OBJECTIVE: This study compared four commonly used demineralizing agents for their capacity to remove smear layer and open dentin tubules. METHODS: Fifty fragments of human dental roots previously exposed to periodontal disease were scaled and randomly divided into the following treatment groups: 1) CA: demineralization with citric acid for 3 min; 2) TC-HCl: demineralization with tetracycline-HCl for 3 min; 3) EDTA: demineralization with EDTA for 3 min; 4) PA: demineralization with 37% phosphoric acid for 3 min; 5) Control: rubbing of saline solution for 3 min. Scanning electron microscopy was used to check for the presence of residual smear layer and to measure the number and area of exposed dentin tubules. RESULTS: Smear layer was present in 100% of the specimens from the PA and control groups, in 80% from the EDTA group, in 33.3% from the TC-HCl group, and in 0% from the CA group. The mean numbers of exposed dentin tubules in a standardized area were: TC-HCl = 43.8±25.2; CA = 39.3±37; PA = 12.1±16.3; EDTA = 4.4±7.5; and Control = 2.3±5.7. The comparison showed significant differences between the following pairs of groups: TC-HCl and Control; TC-HCl and EDTA; CA and Control; and CA and EDTA. The mean percentages of area occupied by exposed dentin tubules were: CA = 0.12±0.17%; TC-HCl = 0.08±0.06%; PA = 0.03±0.05%; EDTA = 0.01±0.01%; and Control = 0±0%. The CA group differed significantly from all the others except the TC-HCl group. CONCLUSION: The ability to remove smear layer and widen dentin tubules decreased in the following order: CA > TC-HCl > PA > EDTA. This information can be of value as an extra parameter when choosing an agent for root conditioning.
Abstract:
The aim of this study was to evaluate the viability of using spent laying hens' meat in the manufacture of mortadella-type sausages with a healthy appeal, by using vegetable oil instead of animal fat. A total of 120 Hy-line® layer hens were distributed in a completely randomized design into two treatments of six replicates with ten birds each. The treatments were birds from the light Hy-line® W36 and semi-heavy Hy-line® Brown lines. Cold carcass, wing, breast and leg fillet yields were determined. Dry matter, protein, and lipid contents were determined in breast and leg fillets. The breast and leg fillets of three replicates per treatment were used to manufacture mortadella. After processing, sausages were evaluated for proximate composition, objective color, microbiological parameters, fatty acid profile and sensory acceptance. The meat of light and semi-heavy spent hens presented good yield and composition, allowing it to be used as raw material for the manufacture of processed products. Mortadellas were safe from a microbiological point of view, and those made with semi-heavy hen fillets were redder and better accepted by consumers. All sensory attributes received mean scores above 5 (neither liked nor disliked). Both products presented high polyunsaturated fatty acid contents and a good polyunsaturated-to-saturated fatty acid ratio. The excellent potential for the use of meat from spent layer hens of both varieties in the manufacture of a healthier mortadella-type sausage was demonstrated.
Abstract:
PURPOSE: The main goal of this study was to develop and compare two different techniques for the classification of specific types of corneal shapes when Zernike coefficients are used as inputs: a feed-forward artificial Neural Network (NN) and discriminant analysis (DA). METHODS: The inputs for both the NN and DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK), installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons, which were associated with 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape, and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of precision ([true positive + true negative]/total number of cases). Mean overall results for all cases for the NN and DA techniques were, respectively, 94% and 84.8%. CONCLUSION: Although we used a relatively small database, the results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be a reliable parameter as input data for the diagnostic automation of VK maps, using either NN or DA.
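A minimal sketch of such a classifier, 15 Zernike coefficients in and 5 shape classes out, using scikit-learn; the data are random placeholders and the hidden-layer size is an assumption, since the abstract specifies only the input and output dimensions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 15))    # 80 exams x 15 Zernike coefficients (placeholder)
y = rng.integers(0, 5, size=80)  # 5 classes: keratoconus, WTR, ATR, normal, post-PRK

# Feed-forward network; the paper's reported "precision" corresponds to
# (TP + TN) / total, i.e., plain accuracy.
nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print(cross_val_score(nn, X, y, cv=5, scoring="accuracy").mean())
```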
Abstract:
In this work, the effects of conical indentation variables on the load-depth indentation curves were analyzed using finite element modeling and dimensional analysis. A 2⁶ factorial design was used with the aim of quantifying the effects of the mechanical properties of the indented material and of the indenter geometry. The analysis was based on the input variables Y/E, R/h_max, n, θ, E, and h_max. The dimensional variables E and h_max were used such that each value of the dimensionless Y/E was obtained with two different values of E, and each value of the dimensionless R/h_max was obtained with two different h_max values. A set of dimensionless functions was defined to analyze the effect of the input variables: Π₁ = P/(Eh²), Π₂ = h_c/h, Π₃ = H/Y, Π₄ = S/(Eh_max), Π₆ = h_max/h_f and Π₇ = W_P/W_T. These six functions were found to depend only on the dimensionless variables studied (Y/E, R/h_max, n, θ). Another dimensionless function, Π₅ = β, was not well defined for most of the dimensionless variables, and the only variable that had a significant effect on β was θ. However, β showed a strong dependence on the fraction of the data selected to fit the unloading curve, which means that β is especially susceptible to error in the calculation of the initial unloading slope.
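Since β's sensitivity to the fitted fraction of the unloading curve is the abstract's closing point, here is a hedged sketch of how the initial unloading slope S is commonly extracted (an Oliver-Pharr style power-law fit, not necessarily the procedure used in the paper); the fit fraction is an explicit knob, and the h, p arrays would come from the simulated load-depth curves:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(h, a, h_f, m):
    # Unloading law P = a * (h - h_f)^m, clipped so h <= h_f gives zero load
    return a * np.clip(h - h_f, 0.0, None) ** m

def initial_unloading_slope(h, p, fit_fraction=0.5):
    """S = dP/dh at h_max, fitted over the top `fit_fraction` of the
    unloading data; beta inherits any error made in this choice."""
    k = max(4, int(len(h) * fit_fraction))
    idx = np.argsort(p)[-k:]                       # highest-load portion
    (a, h_f, m), _ = curve_fit(power_law, h[idx], p[idx],
                               p0=(p.max() / h.max(), 0.0, 1.5), maxfev=10000)
    h_max = h.max()
    return a * m * (h_max - h_f) ** (m - 1)
```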
Abstract:
This work develops a method for solving ordinary differential equations, that is, initial-value problems, with solutions approximated by Legendre polynomials. An iterative procedure for the adjustment of the polynomial coefficients, based on a genetic algorithm, is developed. This procedure is applied to several examples, providing comparisons between its results and the best polynomial fitting when numerical solutions by the traditional Runge-Kutta or Adams methods are available. The resulting algorithm provides reliable solutions even when numerical solutions are not available, that is, when the mass matrix is singular or the numerical integration is unstable.
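A toy version of the idea, fitting Legendre-series coefficients to minimize the ODE residual with a bare-bones mutate-and-select loop standing in for the paper's genetic algorithm; the test problem y' = -y, y(0) = 1 and all tuning constants are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Toy IVP on t in [0, 1]: y' = -y, y(0) = 1 (exact solution exp(-t)).
T = np.linspace(0.0, 1.0, 64)
X = 2.0 * T - 1.0                      # map [0, 1] onto Legendre domain [-1, 1]

def fitness(coeffs):
    """Mean squared ODE residual plus an initial-condition penalty."""
    y = leg.legval(X, coeffs)
    dy = 2.0 * leg.legval(X, leg.legder(coeffs))   # chain rule: dx/dt = 2
    return np.mean((dy + y) ** 2) + (leg.legval(-1.0, coeffs) - 1.0) ** 2

rng = np.random.default_rng(1)
best = rng.normal(size=6)              # 6 Legendre coefficients
for _ in range(5000):                  # mutate, keep the fitter candidate
    trial = best + rng.normal(scale=0.05, size=6)
    if fitness(trial) < fitness(best):
        best = trial

print(fitness(best))                   # residual should be close to zero
```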
Nutritive value and physical characteristics of Xaraes palisadegrass as affected by grazing strategy
Abstract:
The aim of this study was to ascertain whether a defoliation frequency based on a fixed rest period would generate variable sward structural and physiological conditions at each subsequent grazing event. The relative importance of physiological age, compared with chronological age, was established in determining the forage nutritive value of Xaraes palisadegrass [Brachiaria brizantha (Hochst ex A. RICH.) STAPF. cv. Xaraes]. Two grazing frequencies were defined by light interception (LI) at the initiation of grazing (95% LI, "target grazing" [TG], or 100% LI, "delayed grazing" [DG]) and one was based on chronological time, grazing every 28 days (28-d). Forage produced under the TG schedule was mostly leaves (93%), with a higher concentration of crude protein (CP; 138 g/kg in the whole forage), a lower concentration of neutral detergent fibre (NDF) in the stems (740 g/kg), and higher in vitro dry matter digestibility (IVDMD) of the leaves (690 g/kg), compared to the other treatments. Lower grazing frequency strategies (DG and 28-d) resulted in forage with higher proportions of stems (10 and 9%, respectively). Strategies based on light interception did not produce pre-graze forage with a uniform nutritive value, as the indicators varied across grazing cycles. The treatment based on fixed days of rest did not result in uniformity either.