1000 results for Filter-function
Abstract:
Aims - To compare reading performance in children with and without visual function anomalies and to identify the influence of abnormal visual function and other variables on reading ability. Methods - A cross-sectional study was carried out in 110 children of school age (6-11 years) with Abnormal Visual Function (AVF) and 562 children with Normal Visual Function (NVF). An orthoptic assessment (visual acuity, ocular alignment, near point of convergence and accommodation, stereopsis and vergences) and autorefraction were carried out. Oral reading was analyzed (list of 34 words). Number of errors, accuracy (percentage of success) and reading speed (words per minute - wpm) were used as reading indicators. Sociodemographic information from parents (n=670) and teachers (n=34) was obtained. Results - Children with AVF had a higher number of errors (AVF=3.00 errors; NVF=1.00 errors; p<0.001), lower accuracy (AVF=91.18%; NVF=97.06%; p<0.001) and lower reading speed (AVF=24.71 wpm; NVF=27.39 wpm; p=0.007). Reading speed in the 3rd school grade was not statistically different between the two groups (AVF=31.41 wpm; NVF=32.54 wpm; p=0.113). Children with uncorrected hyperopia (p=0.003) and astigmatism (p=0.019) had worse reading performance. Children in the 2nd, 3rd, or 4th grade presented a lower risk of reading impairment compared with the 1st grade. Conclusion - Children with AVF showed reading impairment in the first school grade. It seems that reading ability varies widely and that this disparity lessens in older children. The slow reading of children with AVF resembles that of dyslexic children, which suggests the need for an eye evaluation before classifying a child as dyslexic.
Abstract:
Radio Link Quality Estimation (LQE) is a fundamental building block for Wireless Sensor Networks, namely for reliable deployment, resource management and routing. Existing LQEs (e.g. PRR, ETX, Fourbit, and LQI) are based on a single link property, thus leading to inaccurate estimation. In this paper, we propose F-LQE, which estimates link quality on the basis of four link quality properties: packet delivery, asymmetry, stability, and channel quality. Each of these properties is defined in linguistic terms, the natural language of Fuzzy Logic. The overall quality of the link is specified as a fuzzy rule whose evaluation returns the membership of the link in the fuzzy subset of good links. Values of the membership function are smoothed using an EWMA filter to improve stability. An extensive experimental analysis shows that F-LQE outperforms existing estimators.
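As a minimal illustration of the smoothing step described above, an EWMA filter over the fuzzy membership values can be sketched as follows (the smoothing factor `alpha` and the function name are assumptions; the abstract does not fix them):

```python
def ewma(values, alpha=0.9):
    """Exponentially Weighted Moving Average over a sequence of
    membership values: s_t = alpha * s_{t-1} + (1 - alpha) * x_t.
    Higher alpha gives a smoother, slower-reacting estimate."""
    smoothed = []
    s = values[0]  # seed with the first observation
    for x in values:
        s = alpha * s + (1 - alpha) * x
        smoothed.append(s)
    return smoothed
```

A single noisy dip in the membership sequence is thus damped rather than propagated directly into the link quality estimate.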
Abstract:
Penalty and barrier methods are commonly used to solve constrained Nonlinear Optimization Problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. This means that optimization methods based on derivatives cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems; it includes penalty and barrier methods. In this work a new penalty function, based on Fuzzy Logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: a light penalization when the constraint violation is low and a heavy penalization when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two classic penalty/barrier functions are presented. From these results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
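The progressive-penalization idea above can be sketched with a crisp stand-in for the fuzzy inference engine: a membership ramp that keeps the penalty near zero for small violations and ramps it up for large ones. All names, thresholds and the `weight` factor are assumptions for illustration, not the paper's actual formulation:

```python
def violation(g_values):
    """Total violation for inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def progressive_penalty(g_values, low=0.1, high=1.0, weight=100.0):
    """Progressive penalty: near-zero below `low`, a linear membership
    ramp between `low` and `high`, and full weight above `high`."""
    v = violation(g_values)
    if v <= low:
        degree = 0.0
    elif v >= high:
        degree = 1.0
    else:
        degree = (v - low) / (high - low)  # membership in "badly violated"
    return weight * degree * v
```

In the paper the degree would come from fuzzy rules rather than this fixed ramp, but the qualitative behaviour (low penalty for low violation, heavy penalty for high violation) is the same.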
Abstract:
Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features. (C) 2013 Elsevier B.V. All rights reserved.
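The filter variant described above can be sketched as follows: bins are added incrementally and a (supervised or unsupervised) relevance criterion decides when to stop. The equal-width quantizer, the stopping tolerance and all names are assumptions; the paper's static discretization procedure and criteria may differ:

```python
import numpy as np

def equal_width_discretize(x, bins):
    """Quantize a 1-D feature into `bins` equal-width intervals."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    return np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)

def incremental_fd(x, relevance, max_bins=32, tol=1e-3):
    """Filter-style incremental FD: keep adding bins while the
    user-supplied relevance criterion improves by more than `tol`."""
    best_bins = 2
    best_score = relevance(equal_width_discretize(x, best_bins))
    for b in range(3, max_bins + 1):
        score = relevance(equal_width_discretize(x, b))
        if score - best_score <= tol:
            break  # no meaningful improvement: stop refining
        best_bins, best_score = b, score
    return best_bins, equal_width_discretize(x, best_bins)
```

A wrapper version would replace `relevance` with the accuracy of a classifier trained on the discretized feature.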
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most buildings are constructed from planar surfaces, recognizing them paves the way for automated generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It also adapts itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results show that the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud, with only a minor deviation from the actual configurations. © 2014 IEEE.
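One way such an objective can look is sketched below: each point contributes a logarithmic falloff of its distance to the candidate plane, so the score degrades gracefully instead of using a hard inlier threshold. This is an illustrative guess, with `eps` and all names assumed; the paper's exact formulation is not given in the abstract:

```python
import math

def plane_score(points, normal, d, eps=0.05):
    """Score a candidate plane n.p + d = 0 against a point cloud.
    Points on the plane contribute log(2); contribution decays
    logarithmically with point-to-plane distance."""
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    score = 0.0
    for (x, y, z) in points:
        dist = abs(nx * x + ny * y + nz * z + d) / norm
        score += math.log(1.0 + eps / (eps + dist))  # in (0, log 2]
    return score
```

A GA would then evolve `(normal, d)` chromosomes to maximize this score, and `eps` plays the role of the density-adaptive scale mentioned in the abstract.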
Abstract:
This paper proposes a Genetic Algorithm (GA) for the design of combinational logic circuits. The fitness function is evaluated using Fractional Calculus. This approach extends the classical fitness function by including a fractional-order dynamical evaluation. The experiments reveal superior results when compared with the classical method.
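A common way to give a fitness a fractional-order memory is to weight past values with Grünwald-Letnikov binomial coefficients; the sketch below illustrates that idea only. The abstract does not specify the scheme, so the functions, the order `alpha` and the use of a fitness history are all assumptions:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k),
    via the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def fractional_fitness(history, alpha=0.5):
    """Fractional-order evaluation: the current raw fitness
    (history[-1]) plus a fading memory of earlier generations."""
    w = gl_weights(alpha, len(history))
    return sum(wk * f for wk, f in zip(w, reversed(history)))
```

The weights decay slowly (a power law rather than exponentially), which is the hallmark of fractional-order dynamics.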
Abstract:
Characteristics of tunable wavelength pi'n/pin filters based on a-SiC:H multilayered stacked cells are studied both experimentally and theoretically. Results show that the device combines the demultiplexing operation with simultaneous photodetection and self-amplification of the signal. An algorithm to decode the multiplexed signal is established. A capacitive active band-pass filter model is presented and supported by an electrical simulation of the state variable filter circuit. Experimental and simulated results show that the device acts as a state variable filter. It combines the properties of active high-pass and low-pass filter sections into a capacitive active band-pass filter, using a changing capacitance to control the power delivered to the load.
Abstract:
The behavior of tandem pin heterojunctions based on a-SiC:H alloys is investigated under different optical and electrical bias conditions. The devices are optimized to act as optically selective wavelength filters. Depending on the device configuration (optical gaps, thickness, sequence of cells in the stack structure), the applied voltage (positive or negative) and the optical bias (wavelength, intensity, frequency), it is possible to combine the wavelength discrimination function with self-amplification of the signal. This wavelength nonlinearity allows the amplification or rejection of a weak signal impulse. The device works as an active tunable optical filter for wavelength selection and can be used as an add/drop multiplexer (ADM), which enables data to enter and leave an optical network bit stream without having to demultiplex the stream. Results show that, even under weak transient input signals, the background wavelength controls the output signal. This nonlinearity, due to the transient asymmetrical light penetration of the input channels across the device together with the modification of the electric field profile under optical bias, allows tuning an input channel without demultiplexing the stream. This high optical nonlinearity makes the optimized devices attractive for the amplification of all-optical signals. Transfer characteristic effects due to changes in steady-state light, DC control voltage and applied light pulses are presented. Based on the experimental results and the device configuration, an optoelectronic model is developed. The transfer characteristic effects due to changes in steady-state light, DC control voltage or applied light pulses are simulated and compared with the experimental data. Good agreement was achieved.
Abstract:
This paper studies the describing function (DF) of systems consisting of a mass subjected to nonlinear friction. The friction force is composed of three components, namely the viscous, Coulomb and static friction forces. The system dynamics is analyzed from the DF perspective, revealing a fractional-order behaviour. The reliability of the DF method is evaluated through the signal harmonic content and the limit cycle prediction.
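For reference, the classical describing function of the Coulomb component alone (an ideal relay nonlinearity of level F_c) under a sinusoidal input x(t) = A sin(omega t) is the standard result below; the paper's combined expression, which also folds in the viscous and static terms, is necessarily more involved:

```latex
% DF of ideal Coulomb friction F_c sgn(\dot{x}) for input amplitude A:
N(A) = \frac{4 F_c}{\pi A}
```

The viscous term $B\dot{x}$ is linear and contributes its gain directly, so the amplitude dependence, and hence the fractional-order character discussed in the paper, comes from the nonlinear components.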
Abstract:
New sensory materials based on p-phenylene ethynylene trimers integrating calix[4]arene receptors (CALIX-PET) and tert-butylphenol (TBP-PET) moieties have been synthesized, and their sensitivity and selectivity for the detection of nitroaromatic compounds (NACs) such as nitrobenzene (NB), 2,4-dinitrotoluene (2,4-DNT), 2,4,6-trinitrotoluene (TNT) and picric acid (PA) were investigated in the fluid phase and in the solid state. It was found that both fluorophores displayed high sensitivities toward NAC detection in solution, as evaluated by the Stern-Volmer formalism. For all the tested explosives, the ratio of fluorescence intensities (F-0/F) is a linear function of the quencher concentration only after appropriate correction of the fluorescence quenching data for inner-filter effects. The quenching efficiencies for CALIX-PET and TBP-PET follow the order PA >> TNT > DNT > NB, which correlates well with the quenchers' electron affinities as evaluated from their LUMO energies, thereby suggesting photoinduced electron transfer as the dominant mechanism of fluorescence quenching. The selectivity of these sensors was checked against exemplar interferents possessing differentiated electronic properties (benzoic acid, 2,4-dichlorophenol and benzoquinone), and reduced quenching activity was detected. The quenching efficiencies and response times of the two fluorophores in the solid state toward NB, 2,4-DNT and TNT vapors were evaluated through steady-state fluorescence quenching experiments with the materials dispersed in polymeric matrices or as neat films. The most significant fluorescence quenching responses were achieved for drop-cast films of TBP-PET upon exposure to nitroaromatics.
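The Stern-Volmer analysis mentioned above reduces to fitting F0/F = 1 + K_SV[Q]. A minimal sketch of estimating K_SV from (already inner-filter-corrected) intensities follows; the function name and the choice of a least-squares fit through the origin are assumptions for illustration:

```python
def stern_volmer_ksv(concentrations, f0, f_values):
    """Estimate the Stern-Volmer constant K_SV from
    F0/F = 1 + K_SV * [Q] via a least-squares fit through
    the origin of (F0/F - 1) against quencher concentration [Q]."""
    ys = [f0 / f - 1.0 for f in f_values]
    num = sum(q * y for q, y in zip(concentrations, ys))
    den = sum(q * q for q in concentrations)
    return num / den
```

A larger fitted K_SV corresponds to a more efficient quencher, which is how the PA >> TNT > DNT > NB ordering above would be quantified.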
Abstract:
This paper analyzes the dynamical properties of systems with backlash and impact phenomena based on the describing function method. It is shown that this type of nonlinearity can be analyzed from the perspective of fractional calculus theory. The fractional dynamics is compared with that of standard models.
Abstract:
This paper studies the describing function (DF) of systems consisting of a mass subjected to nonlinear friction. The friction force is decomposed into two components, namely the viscous and the Coulomb friction. The system dynamics is analyzed from the DF perspective, revealing a fractional-order behavior. The reliability of the DF method is evaluated through the signal harmonic content.
Abstract:
Background - Medical image perception research relies on visual data to study the diagnostic relationship between observers and medical images. A consistent method to assess visual function for participants in medical imaging research has not been developed and represents a significant gap in existing research. Methods - Three visual assessment factors appropriate to observer studies were identified: visual acuity, contrast sensitivity, and stereopsis. A test was designed for each, and 30 radiography observers (mean age 31.6 years) participated in each test. Results - Mean binocular visual acuity for distance was 20/14 for all observers. The difference between observers who did and did not use corrective lenses was not statistically significant (P = .12). All subjects had normal values for near visual acuity and stereoacuity. Contrast sensitivity was better than population norms. Conclusion - All observers had normal visual function and could participate in medical imaging visual analysis studies. Evaluation protocols and population norms are provided. Further studies are necessary to fully understand the relationship between visual performance on tests and diagnostic accuracy in practice.