1000 results for DIAGNOSIS SCHEME
Abstract:
We have performed ab initio molecular dynamics simulations to generate an atomic structure model of amorphous hafnium oxide (a-HfO(2)) via a melt-and-quench scheme. This structure is analyzed via bond-angle and partial pair distribution functions. These results give a Hf-O average nearest-neighbor distance of 2.2 angstrom, which should be compared with the bulk values, which range from 1.96 to 2.54 angstrom. We have also investigated the neutral O vacancy and a substitutional Si impurity for various sites, as well as the amorphous phase of Hf(1-x)Si(x)O(2) for x = 0.25, 0.375, and 0.5.
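A minimal sketch of how a partial pair distribution function such as g_Hf-O(r) can be computed from one snapshot of atomic coordinates under periodic boundary conditions; the cell size, atom counts, and random coordinates are illustrative stand-ins, not data from the simulations above.

    import numpy as np

    def partial_rdf(pos_a, pos_b, box, r_max, n_bins=200):
        # Partial pair distribution function g_AB(r) for one orthorhombic,
        # periodic snapshot; pos_a, pos_b are (N, 3) Cartesian coordinates.
        box = np.asarray(box, dtype=float)
        edges = np.linspace(0.0, r_max, n_bins + 1)
        counts = np.zeros(n_bins)
        for ra in pos_a:
            d = pos_b - ra
            d -= box * np.round(d / box)          # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            r = r[(r > 1e-9) & (r < r_max)]       # drop self-pairs when A == B
            counts += np.histogram(r, bins=edges)[0]
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
        rho_b = len(pos_b) / np.prod(box)         # number density of species B
        ideal = rho_b * shell_vol * len(pos_a)    # ideal-gas pair count per shell
        r_mid = 0.5 * (edges[1:] + edges[:-1])
        return r_mid, counts / ideal

    # Illustrative use with random coordinates (replace with MD snapshot data);
    # the first peak of g_Hf-O(r) gives the average nearest-neighbor distance.
    rng = np.random.default_rng(0)
    box = np.array([15.0, 15.0, 15.0])            # assumed cell, in angstrom
    hf = rng.uniform(0.0, 15.0, size=(32, 3))
    o = rng.uniform(0.0, 15.0, size=(64, 3))
    r, g_hf_o = partial_rdf(hf, o, box, r_max=7.0)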
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines applied to the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
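As an illustration of the kind of evaluation described above, the sketch below cross-validates a standard SVM with a Gaussian (RBF) kernel over a grid of kernel parameters using scikit-learn; the synthetic feature matrix is only a stand-in for the wavelet-statistics and Lyapunov-exponent features of the study, and the grid over gamma plays the role of the sweep over the kernel radius.

    import numpy as np
    from sklearn.model_selection import GridSearchCV, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for the EEG feature matrix: rows = signal segments,
    # columns = extracted features (e.g., DWT statistics, Lyapunov exponents).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
                   rng.normal(0.8, 1.2, (100, 8))])
    y = np.r_[np.zeros(100), np.ones(100)]        # 0 = normal, 1 = epileptic

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = GridSearchCV(
        model,
        param_grid={"svc__C": [0.1, 1, 10, 100],
                    "svc__gamma": np.logspace(-3, 1, 9)},
        cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
        scoring="accuracy",
    )
    grid.fit(X, y)
    print("best CV accuracy:", grid.best_score_, "with", grid.best_params_)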
Abstract:
This paper presents a novel adaptive control scheme, with improved convergence rate, for the equalization of harmonic disturbances such as engine noise. First, modifications for improving the convergence speed of the standard filtered-X LMS control are described. Equalization capabilities are then implemented, allowing the independent tuning of harmonics. Finally, by providing the desired order vs. engine speed profiles, the pursued sound quality attributes can be achieved. The proposed control scheme is first demonstrated with a simple secondary path model and then experimentally validated with the aid of a vehicle mockup which is excited with engine noise. The engine excitation is provided by a real-time sound quality equivalent engine simulator. Stationary and transient engine excitations are used to assess the control performance. The results reveal that the proposed controller is capable of large order-level reductions (up to 30 dB) for stationary excitation, which allows a comfortable margin for equalization. The same holds for slow run-ups (> 15 s) thanks to the improved convergence rate. This margin, however, gets narrower with shorter run-ups (<= 10 s). (c) 2010 Elsevier Ltd. All rights reserved.
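For reference, a minimal time-domain sketch of the standard filtered-X LMS update on which the proposed scheme builds (single reference, single error channel); the secondary-path model, step size, and signals are illustrative assumptions, and the convergence-speed and equalization modifications of the paper are not included.

    import numpy as np

    fs, n = 2000, 20000
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 120 * t)               # reference: one engine order
    d = 0.8 * np.sin(2 * np.pi * 120 * t + 0.6)   # disturbance at the error mic

    s = np.array([0.0, 0.6, 0.3, 0.1])            # "true" secondary path (FIR)
    s_hat = s.copy()                              # assumed perfect path model
    L, mu = 16, 0.01                              # control filter length, step size
    w = np.zeros(L)

    x_buf = np.zeros(L)                           # reference history for W(z)
    xf_buf = np.zeros(L)                          # filtered-reference history
    y_buf = np.zeros(len(s))                      # control-output history for S(z)
    xr_buf = np.zeros(len(s_hat))                 # reference history for S_hat(z)
    e = np.zeros(n)

    for k in range(n):
        x_buf = np.r_[x[k], x_buf[:-1]]
        y = w @ x_buf                             # control signal y(k)
        y_buf = np.r_[y, y_buf[:-1]]
        e[k] = d[k] + s @ y_buf                   # residual at the error sensor

        xr_buf = np.r_[x[k], xr_buf[:-1]]
        xf_buf = np.r_[s_hat @ xr_buf, xf_buf[:-1]]
        w -= mu * e[k] * xf_buf                   # filtered-X LMS weight update

    print("residual power, first vs last second:",
          np.mean(e[:fs] ** 2), np.mean(e[-fs:] ** 2))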
Abstract:
In this paper, the method of Galerkin and the Askey-Wiener scheme are used to obtain approximate solutions to the stochastic displacement response of Kirchhoff plates with uncertain parameters. Theoretical and numerical results are presented. The Lax-Milgram lemma is used to express the conditions for existence and uniqueness of the solution. Uncertainties in plate and foundation stiffness are modeled by respecting these conditions, hence using Legendre polynomials indexed in uniform random variables. The space of approximate solutions is built using results of density between the space of continuous functions and Sobolev spaces. Approximate Galerkin solutions are compared with results of Monte Carlo simulation, in terms of first and second order moments and in terms of histograms of the displacement response. Numerical results for two example problems show very fast convergence to the exact solution, at excellent accuracies. The Askey-Wiener Galerkin scheme developed herein is able to reproduce the histogram of the displacement response. The scheme is shown to be a theoretically sound and efficient method for the solution of stochastic problems in engineering. (C) 2009 Elsevier Ltd. All rights reserved.
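A minimal sketch of the moment computation behind an Askey-Wiener (Legendre polynomial chaos) expansion in a single uniform random variable, checked against Monte Carlo sampling; the expansion coefficients are illustrative and do not come from the plate problem.

    import numpy as np
    from numpy.polynomial.legendre import legval

    # Response expanded as u(xi) = sum_i c_i P_i(xi), with xi ~ Uniform(-1, 1).
    c = np.array([1.00, 0.20, 0.05, 0.01])        # assumed chaos coefficients

    # For Legendre chaos: E[P_0] = 1, E[P_i] = 0 (i >= 1), E[P_i^2] = 1/(2i+1),
    # so the first two moments follow directly from the coefficients.
    mean_pc = c[0]
    var_pc = np.sum(c[1:] ** 2 / (2 * np.arange(1, len(c)) + 1))

    # Monte Carlo check, the reference the Galerkin solution is compared against.
    rng = np.random.default_rng(0)
    xi = rng.uniform(-1.0, 1.0, 200_000)
    u = legval(xi, c)
    print("mean:", mean_pc, "vs MC:", u.mean())
    print("variance:", var_pc, "vs MC:", u.var())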
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
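A minimal sketch of the syndrome computation over GF(2) that underlies the hardness assumption; the tiny random parity-check matrix and low-weight error vector are illustrative stand-ins, not a secure instantiation of the signature scheme.

    import numpy as np

    rng = np.random.default_rng(0)

    # Random binary parity-check matrix H with r = n - k rows for an [n, k] code.
    n, r, w = 24, 8, 3
    H = rng.integers(0, 2, size=(r, n), dtype=np.uint8)

    # Low-weight error vector e (weight w); its syndrome is s = H e^T over GF(2).
    e = np.zeros(n, dtype=np.uint8)
    e[rng.choice(n, size=w, replace=False)] = 1
    s = H @ e % 2

    # Syndrome decoding asks for a weight-w vector matching a given syndrome,
    # which is believed hard for random linear codes.
    print("syndrome:", s)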
Abstract:
The present paper proposes a flexible consensus scheme for group decision making, which allows one to obtain a consistent collective opinion from information provided by each expert in terms of multigranular fuzzy estimates. It is based on a linguistic hierarchical model with multigranular sets of linguistic terms, and the choice of the most suitable set is a prerogative of each expert. From the human viewpoint, using such a model is advantageous, since it permits each expert to utilize linguistic terms that reflect more adequately the level of uncertainty intrinsic to his evaluation. From the operational viewpoint, the advantage of using such a model lies in the fact that it allows one to express the linguistic information in a unique domain, without loss of information, during the discussion process. The proposed consensus scheme supposes that the moderator can interfere in the discussion process in different ways. The intervention can be a request to any expert to update his opinion or can be the adjustment of the weight of each expert's opinion. An optimal adjustment can be achieved through the execution of an optimization procedure that searches for the weights that maximize a corresponding soft consensus index. In order to demonstrate the usefulness of the presented consensus scheme, a technique for multicriteria analysis, based on fuzzy preference relation modeling, is utilized for solving a hypothetical enterprise strategy planning problem, generated with the use of the Balanced Scorecard methodology. (C) 2009 Elsevier Inc. All rights reserved.
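A minimal numeric sketch of the weight-adjustment step: expert opinions are assumed to have already been unified into a common numeric domain, the collective opinion is their weighted mean, a simple soft consensus index is taken as one minus the weighted mean distance to the collective opinion, and the weights are optimized to maximize it. The index definition, opinion values, and lower bound on the weights are illustrative assumptions, not the paper's formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Expert opinions after unification into a common numeric domain (assumed).
    opinions = np.array([0.62, 0.70, 0.55, 0.90])

    def consensus_index(weights):
        w = np.clip(weights, 1e-9, None)
        w = w / w.sum()                           # normalized expert weights
        collective = w @ opinions                 # collective opinion
        # Illustrative soft consensus index: 1 - weighted mean absolute deviation.
        return 1.0 - w @ np.abs(opinions - collective)

    # Moderator's optimal adjustment: weights maximizing the consensus index;
    # a lower bound keeps every expert's opinion in play.
    res = minimize(lambda w: -consensus_index(w),
                   x0=np.full(4, 0.25), bounds=[(0.1, 1.0)] * 4)
    w_opt = res.x / res.x.sum()
    print("adjusted weights:", np.round(w_opt, 3))
    print("consensus index:", round(consensus_index(w_opt), 3))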
Abstract:
This paper presents a study of the stationary phenomenon of superheated or metastable liquid jets flashing into a two-dimensional axisymmetric domain, while in the two-phase region. In general, the phenomenon starts when a high-pressure, high-temperature liquid jet emerges from a small nozzle or orifice and expands into a low-pressure chamber, below its saturation pressure taken at the injection temperature. As the process evolves, crossing the saturation curve, one observes that the fluid remains in the liquid phase, reaching a superheated condition. Then, the liquid undergoes an abrupt phase change by means of an oblique evaporation wave. Across this phase change the superheated liquid becomes a two-phase high-speed mixture in various directions, expanding to supersonic velocities. In order to reach the downstream pressure, the supersonic fluid continues to expand, crossing a complex bow shock wave. The balance equations that govern the phenomenon are mass conservation, momentum conservation, and energy conservation, plus an equation of state for the substance. A false-transient model is implemented using the dispersion-controlled dissipative (DCD) shock-capturing scheme, which is used to calculate the flow conditions until the steady-state condition is reached. Numerical results obtained with the computational code DCD-2D vI have been analyzed. Copyright (C) 2009 John Wiley & Sons, Ltd.
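For illustration only, a generic 1D finite-volume step for the compressible Euler equations with an ideal-gas equation of state, using the simple Lax-Friedrichs flux rather than the dispersion-controlled dissipative (DCD) scheme of the paper; it shows the overall structure (conservation laws plus an equation of state, marched in pseudo-time toward steady state) that a shock-capturing code of this kind follows. The gas constant, grid, and shock-tube initial state are assumptions.

    import numpy as np

    gamma = 1.4                                   # ideal-gas EOS (assumed fluid)

    def flux(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)    # equation of state
        return np.array([mom, mom * u + p, u * (E + p)])

    def lax_friedrichs_step(U, dx, dt):
        # One conservative update U_i <- U_i - dt/dx (F_{i+1/2} - F_{i-1/2}).
        F = flux(U)
        F_half = (0.5 * (F[:, :-1] + F[:, 1:])
                  - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1]))
        U_new = U.copy()
        U_new[:, 1:-1] -= dt / dx * (F_half[:, 1:] - F_half[:, :-1])
        return U_new                              # boundary cells kept fixed

    # Shock-tube-like initial state as a stand-in for the expanding jet.
    nx, dx, dt = 400, 1.0 / 400, 2.5e-4
    left = np.arange(nx) < nx // 2
    rho = np.where(left, 1.0, 0.125)
    p = np.where(left, 1.0, 0.1)
    U = np.array([rho, np.zeros(nx), p / (gamma - 1.0)])

    for _ in range(800):                          # false-transient (pseudo-time) march
        U = lax_friedrichs_step(U, dx, dt)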
Abstract:
The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme combining both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another in which a two-phase scheme is explored. In the first phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained that satisfies either the minimum member size or the minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
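A minimal sketch of a mesh-independent density filter followed by a smooth Heaviside-type threshold projection, the kind of direct projection commonly used to impose a minimum member size; the inverse projection for hole-size control is not reproduced here, and the filter radius, beta, and eta values are illustrative.

    import numpy as np

    def density_filter_1d(rho, r_min, dx):
        # Linear (hat-weight) filter over radius r_min on a 1D element strip.
        n = len(rho)
        rho_f = np.empty(n)
        for i in range(n):
            w = np.maximum(0.0, r_min - np.abs(np.arange(n) - i) * dx)
            rho_f[i] = w @ rho / w.sum()
        return rho_f

    def heaviside_projection(rho_f, beta=8.0, eta=0.5):
        # Smooth threshold projection pushing filtered densities toward 0/1.
        num = np.tanh(beta * eta) + np.tanh(beta * (rho_f - eta))
        den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
        return num / den

    # Illustrative use on a noisy 1D density field.
    rng = np.random.default_rng(0)
    rho = np.clip(np.r_[np.ones(20), np.zeros(20)]
                  + 0.2 * rng.standard_normal(40), 0.0, 1.0)
    rho_proj = heaviside_projection(density_filter_1d(rho, r_min=0.08, dx=0.025))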
Abstract:
A thermodynamic information system for diagnosis and prognosis of an existing power plant was developed. The system is based on an analytic approach that reports the current thermodynamic condition of all cycle components, as well as the improvement that can be obtained in the cycle performance by the elimination of the discovered anomalies. The effects induced by component anomalies and repairs on the efficiency of other components, which have proven to be one of the main drawbacks in diagnosis and prognosis analyses, are taken into consideration through the use of performance curves and corrected performance curves together with the thermodynamic data collected from the distributed control system. The approach used to develop the system is explained, the system implementation in a real gas turbine cogeneration combined cycle is described, and the results are discussed. (C) 2011 Elsevier Ltd. All rights reserved.
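A minimal sketch of the comparison at the heart of such a diagnosis system: the measured efficiency of a component is checked against the value expected from its (corrected) performance curve at the current operating point, and a deviation beyond a tolerance is flagged as an anomaly whose elimination would recover performance. The curve, operating data, and tolerance below are hypothetical.

    import numpy as np

    # Hypothetical corrected performance curve: load fraction -> expected efficiency.
    load_pts = np.array([0.4, 0.6, 0.8, 1.0])
    eff_pts = np.array([0.26, 0.30, 0.33, 0.35])

    def diagnose(load, eff_measured, tol=0.01):
        # Flag an anomaly when measured efficiency falls below the curve by > tol.
        eff_expected = np.interp(load, load_pts, eff_pts)
        deviation = eff_expected - eff_measured
        return {"expected": eff_expected,
                "deviation": deviation,
                "anomaly": deviation > tol}       # recoverable margin if repaired

    # Example operating point as read from the distributed control system.
    print(diagnose(load=0.75, eff_measured=0.30))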
Abstract:
We standardized the serodiagnosis of dogs infected with Trypanosoma cruzi using the TESA (trypomastigote excreted-secreted antigen)-blot developed for human Chagas disease. TESA-blot showed 100% sensitivity and specificity. In contrast, ELISA using TESA (TESA-ELISA) or epimastigotes (epi-ELISA) as antigen yielded 100% sensitivity but specificities of 94.1% and 49.4%, respectively. When used in field studies in a region endemic for Chagas disease, visceral leishmaniasis and Trypanosoma evansi (Mato Grosso do Sul state, Central Brazil), positivities were 9.3% for TESA-blot, 10.7% for TESA-ELISA and 32% for epi-ELISA. Dogs from a region non-endemic for these infections (Rondonia state, western Amazonia), where T. cruzi is enzootic, showed positivities of 4.5% for TESA-blot and epi-ELISA and 6.8% for TESA-ELISA. Sera from urban dogs from Santos, Sao Paulo, where these diseases are absent, yielded negative results. TESA-blot was the only method that distinguished dogs infected with T. cruzi from those infected with Leishmania chagasi and/or Trypanosoma evansi. (C) 2009 Published by Elsevier B.V.
Abstract:
Maize (Zea mays L.) is a cereal of great importance to the world economy, which is also true for Brazil, particularly in the South region. Grain yield and plant height have been chosen as important criteria by breeders and farmers from Santa Catarina State (SC), Brazil. The objective of this work was to estimate genetic-statistical parameters associated with the genetic gain for grain yield and plant height in the first cycle of convergent-divergent half-sib selection in a maize population (MPA1) cultivated by farmers in the municipality of Anchieta (SC). Three experiments were carried out on different small farms at Anchieta using low external agronomic inputs; each experiment represented an independent sample of half-sib families, which were evaluated in randomized complete blocks with three replications per location. Significant differences among half-sib families were observed for both variables in all experiments. The expected responses to truncated selection of the 25% best families in each experiment were 5.1, 5.8 and 5.2% for reducing plant height and 3.9, 5.7 and 5.0% for increasing grain yield, respectively. The magnitudes of the genetic-statistical parameters estimated show that the composite population MPA1 exhibits enough genetic variability to be used in a cyclical process of recurrent selection. There was evidence that the genetic structure of the base population MPA1, as indicated by its genetic variability, may lead to expressive changes in the traits under selection, even under low selection pressure.
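For reference, the expected gains quoted above follow the general logic of truncation selection, whose simplest form (the half-sib family-selection variant actually used may differ in its variance components) is

    R = i \, h^{2} \, \sigma_{P}

where i is the standardized selection intensity (about 1.27 when the best 25% of families are selected under normality), h^2 is the heritability at the level of family means, and sigma_P is the phenotypic standard deviation of the family means; expressing R as a percentage of the population mean yields figures directly comparable to the 3.9-5.8% values reported.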
Abstract:
Rare HFE variants have been shown to be associated with hereditary hemochromatosis (HH), an iron overload disease. The low frequency of the HFE p.C282Y mutation in HH-affected Brazilian patients may suggest that other HFE-related mutations may also be implicated in the pathogenesis of HH in this population. The main aim was to screen for new HFE mutations in Brazilian individuals with primary iron overload and to investigate their relationship with HH. Fifty Brazilian patients with primary iron overload (transferrin saturation >50% in females and >60% in males) were selected. Subsequent bidirectional sequencing of each HFE exon was performed. The effect of HFE mutations on protein structure was analyzed by molecular dynamics simulation and free binding energy calculations. p.C282Y in homozygosis or in heterozygosis with p.H63D were the most frequent genotypic combinations associated with HH in our sample population (present in 17 individuals, 34%). Thirty-six (72.0%) of the 50 individuals presented at least one HFE mutation. The most frequent genotype associated with HH was the homozygous p.C282Y mutation (n = 11, 22.0%). One novel mutation (p.V256I) was identified in heterozygosis with the p.H63D mutation. In silico modeling analysis of protein behavior indicated that the p.V256I mutation does not reduce the binding affinity between HFE and beta-2-microglobulin (beta 2M) in the way the p.C282Y mutation does compared with the native HFE protein. In conclusion, screening of HFE through direct sequencing, as compared with p.C282Y/p.H63D genotyping, was not able to increase the molecular diagnosis yield of HH. The novel p.V256I mutation could not be implicated in the molecular basis of the HH phenotype, although its role in HH-phenotype development cannot be completely excluded. Our molecular modeling analysis can help in the analysis of novel, previously undescribed HFE mutations. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
In this study, 20 Brazilian public schools were assessed with regard to the implementation of good manufacturing practices and standard sanitation operating procedures. We used a checklist comprising 10 parts (facilities and installations, water supply, equipment and tools, pest control, waste management, personal hygiene, sanitation, storage, documentation, and training), for a total of 69 questions. The cost of implementing the modifications needed to correct the nonconformities found was also determined, so that decision-making prioritization could be based on technical data. The average nonconformity percentage at the schools with respect to the prerequisite program was 36%; 66% of the schools had inadequate facilities and installations, 65% inadequate waste management, 44% inadequate documentation, and 35% inadequate water supply and sanitation. The initial estimated cost of the changes was U.S.$24,438, with monthly investments of 1.55% of the currently invested values. This would result in a U.S.$0.015 increase in the cost of each served meal to recover the investment within a year. Thus, we have concluded that such modifications are economically feasible and should be considered among the technical requirements when prerequisite program implementation priorities are established.
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
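A minimal sketch of the single-qubit part of such a linear-optics scheme: a qubit encoded in one photon shared between two optical modes (dual-rail encoding), on which beam splitters and phase shifters act as 2x2 unitaries. The measurement-induced nonlinearity that the proposal exploits for two-qubit gates is not captured by this single-photon picture, and the sign conventions below are one common choice.

    import numpy as np

    # Dual-rail encoding: |0> = photon in mode a, |1> = photon in mode b.
    ket0 = np.array([1.0 + 0j, 0.0])

    def beam_splitter(theta):
        # Beam splitter of mixing angle theta on the one-photon subspace.
        return np.array([[np.cos(theta), np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]])

    def phase_shifter(phi):
        # Phase shifter acting on mode b only.
        return np.diag([1.0, np.exp(1j * phi)])

    # A 50/50 beam splitter sends |0> to an equal superposition of the modes
    # (a Hadamard-like operation); together with phase shifters, such elements
    # generate arbitrary single-qubit rotations.
    bs5050 = beam_splitter(np.pi / 4)
    print(np.round(bs5050 @ ket0, 3))             # ~ (|0> - |1>)/sqrt(2)
    print(np.round(phase_shifter(np.pi) @ bs5050 @ ket0, 3))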