990 results for Error Function
Abstract:
This was a prospective study of 43 septic neonates at the NICU of the School of Medicine of Botucatu, São Paulo State University. Clinical and laboratory data of sepsis were analyzed by outcome, divided into two groups: survival and death. We calculated the discriminatory power of the relevant variables for the diagnosis of sepsis in each group and, using Discriminant Analysis software, proposed a function. Of the 43 septic cases, there were 31 survivors and 12 deaths. The variables with the highest discriminatory power were the number of compromised systems, the SNAP, FiO2, and (A-a)O2. The study of these and other variables, such as birth weight, number of risk factors, and pH, using a Linear Discriminant Function (LDF) allowed us to identify neonates at high risk of death with a low error rate (8.33%). The LDF was: F = 0.00043 (birth weight) + 0.30367 (number of risk factors) - 0.1171 (number of compromised systems) + 0.33223 (SNAP) + 2.27972 (pH) - 14.96511 (FiO2) + 0.01814 ((A-a)O2). If F > 22.77, there was a high risk of death. This study suggests that the LDF at the onset of sepsis is useful for the early identification of high-risk neonates who need special clinical and laboratory surveillance.
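As a concrete illustration, here is a minimal Python sketch of how the reported LDF could be applied. The coefficients and the 22.77 threshold come from the abstract; the function names, argument names, and units (FiO2 as a fraction, (A-a)O2 in mmHg) are assumptions for illustration only.

```python
def ldf_score(birth_weight_g, n_risk_factors, n_compromised_systems,
              snap, ph, fio2, aa_o2_gradient):
    """Linear Discriminant Function from the abstract above.

    Coefficients are taken verbatim from the study; argument names
    and units are illustrative assumptions.
    """
    return (0.00043 * birth_weight_g
            + 0.30367 * n_risk_factors
            - 0.1171 * n_compromised_systems
            + 0.33223 * snap
            + 2.27972 * ph
            - 14.96511 * fio2
            + 0.01814 * aa_o2_gradient)

def high_risk_of_death(f_score, threshold=22.77):
    """The abstract flags a high risk of death when F > 22.77."""
    return f_score > threshold

# Hypothetical example values, not patient data:
f = ldf_score(1800, 4, 3, 20, 7.25, 0.6, 300)
print(f, high_risk_of_death(f))
```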
Abstract:
Artificial Neural Networks (ANNs) are widely used in many engineering applications, such as the solution of nonlinear problems. Implementing this technique in reconfigurable devices challenges researchers on several fronts, such as floating-point precision, the nonlinear activation function, performance, and the area used in the FPGA. The contribution of this work is the approximation of a nonlinear function used in ANNs, the popular hyperbolic tangent activation function. The system architecture comprises several scenarios that provide a trade-off between performance, precision, and FPGA area. The results are compared across the different scenarios and with the current literature in terms of error analysis, area, and system performance. © 2013 IEEE.
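To make the precision/area trade-off concrete, below is a small Python sketch of a uniform piecewise linear tanh approximation, the kind of scheme commonly mapped to FPGA lookup tables. The segment counts, saturation range, and uniform-breakpoint choice are assumptions, not the paper's architecture.

```python
import numpy as np

def tanh_pwl(x, n_segments=16, x_max=4.0):
    """Piecewise linear approximation of tanh on [-x_max, x_max].

    Breakpoints are uniform; outside the interval the output is
    held at tanh(+/-x_max), mimicking a hardware LUT with saturation.
    """
    x_clipped = np.clip(x, -x_max, x_max)
    breakpoints = np.linspace(-x_max, x_max, n_segments + 1)
    return np.interp(x_clipped, breakpoints, np.tanh(breakpoints))

# More segments -> smaller error but a larger LUT (more FPGA area).
x = np.linspace(-6, 6, 10001)
for n in (4, 8, 16, 32):
    err = np.max(np.abs(tanh_pwl(x, n) - np.tanh(x)))
    print(f"{n:3d} segments: max abs error = {err:.5f}")
```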
Abstract:
A circuit for transducer linearization tasks has been designed and built using discrete components; it implements a Radial Basis Function Network (RBFN) with three basis functions. Its application to linearizing a thermistor showed that the network has good approximation capabilities. The circuit's advantages are its adjustable amplitude, width, and center.
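A software analogue may help clarify the idea: the Python sketch below fits a three-basis-function Gaussian RBFN to linearize a synthetic NTC thermistor. The beta-model thermistor, the log-resistance input, and the quantile-placed centers are all assumptions for illustration, not the paper's circuit.

```python
import numpy as np

# Synthetic NTC thermistor (beta model): R(T) = R0 * exp(B * (1/T - 1/T0)).
R0, B, T0 = 10e3, 3950.0, 298.15
T = np.linspace(273.15, 373.15, 200)           # true temperature, K
R = R0 * np.exp(B * (1.0 / T - 1.0 / T0))      # nonlinear resistance

# RBFN with three Gaussian basis functions; input is log-resistance.
x = np.log(R)
centers = np.quantile(x, [0.15, 0.5, 0.85])    # basis centers (assumed)
width = (x.max() - x.min()) / 3.0              # common width (assumed)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Output weights (amplitudes) by linear least squares, bias included.
A = np.hstack([Phi, np.ones((len(x), 1))])
w, *_ = np.linalg.lstsq(A, T, rcond=None)

T_hat = A @ w
print("max abs linearization error (K):", np.max(np.abs(T_hat - T)))
```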
Abstract:
While beneficially decreasing the necessary incision size, arthroscopic hip surgery increases surgical complexity due to the loss of joint visibility. To ease this difficulty, a computer-aided mechanical navigation system was developed to present the location of the surgical tool relative to the patient's hip joint. A preliminary study reduced the position error of the tracking linkage with a limited number of static testing trials. In this study, a correction method, comprising a rotational correction factor and a length correction function, was developed through more in-depth static testing. The correction method was then applied to additional static and dynamic testing trials to evaluate its effectiveness. For static testing, the position error decreased from an average of 0.384 inches to 0.153 inches, an error reduction of 60.5%. The three parameters used to quantify error reduction in dynamic testing did not show consistent results. The vertex coordinates achieved a 29.4% error reduction, yet with large variation in the upper vertex. The triangular area error was reduced by 5.37%, but inconsistently across the five dynamic trials. The error of the vertex angles increased, indicating a shape torsion under the developed correction method. While the established correction method effectively and consistently reduced position error in static testing, it did not produce consistent results in dynamic trials. More dynamic parameters should be explored to quantify error reduction in dynamic testing, and a more in-depth dynamic testing methodology should be employed to further improve the accuracy of the computer-aided navigation system.
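For orientation, here is a heavily simplified Python sketch of what a rotational correction factor plus a length correction might look like when applied to a tracked tool-tip position. The thesis does not specify the exact form of either correction, so the rotation axis, the scalar length scale, and all values below are placeholders.

```python
import numpy as np

def apply_correction(p, theta_corr_deg, length_scale):
    """Apply a rotational correction (about the base z-axis, assumed)
    and a length correction (a simple scale, assumed) to a tracked
    tool-tip position p = (x, y, z) in inches.
    """
    t = np.radians(theta_corr_deg)
    Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0,        0.0,       1.0]])
    return length_scale * (Rz @ np.asarray(p, dtype=float))

measured = [10.2, 4.1, 7.8]                       # hypothetical reading
corrected = apply_correction(measured, theta_corr_deg=1.5, length_scale=0.996)
print(corrected)
```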
Abstract:
A free-space optical (FSO) laser communication system with perfect fast tracking experiences random power fading due to atmospheric turbulence. For an FSO communication system without fast tracking or with imperfect fast tracking, the fading probability density function (pdf) is also affected by the pointing error. In this thesis, the overall fading pdfs of an FSO communication system with pointing errors are calculated using an analytical method based on the fast-tracked on-axis and off-axis fading pdfs and the fast-tracked beam profile of a turbulence channel. The overall fading pdf is first studied for an FSO communication system with a collimated laser beam. Large-scale numerical wave-optics simulations are performed to verify the analytically calculated fading pdf with a collimated beam under various turbulence channels and pointing errors. The calculated overall fading pdfs are almost identical to the directly simulated fading pdfs. The calculated overall fading pdfs are also compared with the gamma-gamma (GG) and log-normal (LN) fading pdf models; they fit better than both the GG and LN models under different receiver aperture sizes in all the studied cases. The analytical method is then extended to an FSO communication system with a diverging beam. It is shown that the gamma pdf model remains valid for the fast-tracked on-axis and off-axis fading pdfs with a point-like receiver aperture when the laser beam propagates with a diverging angle. Large-scale numerical wave-optics simulations prove that the analytically calculated fading pdfs perfectly fit the overall fading pdfs for both focused and diverged beams. The influence of the fast-tracked on-axis and off-axis fading pdfs, the fast-tracked beam profile, and the pointing error on the overall fading pdf is also discussed. Finally, the analytical method is compared with the heuristic fading pdf models proposed since the 1970s. Although some of the previously proposed models fit experimental and simulation data closely, these close fits exist only under particular conditions. Only the analytical method fits the directly simulated fading pdfs accurately under different turbulence strengths, propagation distances, receiver aperture sizes, and pointing errors.
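For reference, the two heuristic benchmarks named in the abstract, the gamma-gamma and log-normal irradiance pdfs, have standard closed forms. The Python sketch below evaluates them (not the thesis's analytical method); the parameter values are illustrative, and both pdfs assume the mean irradiance is normalized to 1.

```python
import numpy as np
from scipy.special import gamma as G, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf with mean irradiance normalized to 1."""
    k = (alpha + beta) / 2.0
    return (2 * (alpha * beta) ** k / (G(alpha) * G(beta))
            * I ** (k - 1) * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I)))

def lognormal_pdf(I, sigma2):
    """Log-normal irradiance pdf with log-irradiance variance sigma2
    and mean irradiance normalized to 1."""
    return (1.0 / (I * np.sqrt(2 * np.pi * sigma2))
            * np.exp(-(np.log(I) + sigma2 / 2.0) ** 2 / (2 * sigma2)))

I = np.linspace(0.01, 3.0, 300)          # normalized irradiance grid
p_gg = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)   # illustrative parameters
p_ln = lognormal_pdf(I, sigma2=0.3)              # illustrative parameter
```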
Abstract:
Many membrane proteins, including the GABA(A) [GABA (gamma-aminobutyric acid) type A] receptors, are oligomers, often built from different subunits. For example, the major adult isoform of the GABA(A) receptor is a pentamer built from three different subunits. In theory, co-expression of three subunits may result in many different receptor pentamers. Subunit concatenation allows us to pre-define the relative arrangement of the subunits. This method may thus be used to study receptor architecture, but also the nature of binding sites; indeed, it made possible the discovery of a novel benzodiazepine site. We use subunit concatenation here to study delta-subunit-containing GABA(A) receptors, and we provide evidence for the formation of different functional subunit arrangements in recombinant alpha(1)beta(3)delta and alpha(6)beta(3)delta receptors. As with all valuable techniques, subunit concatenation also has some pitfalls. Most of these can be avoided by carefully titrating and minimizing the length of the linker sequences joining two linked subunits, and by avoiding inclusion of the signal sequence in all but the N-terminal subunit of a multi-subunit construct. Perhaps the most common error found in the literature is the assumption that low expression can be overcome by simply overloading the expression system with genetic information. As some concatenated constructs by themselves yield a low level of expression, overloading the expression system may promote erroneous assembly that nevertheless produces functional receptors, leading to wrong conclusions.
Abstract:
This paper proposes asymptotically optimal tests for an unstable parameter process under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and it suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution that is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
Abstract:
Studies on the consequences of ocean acidification for the marine ecosystem have revealed behavioural changes in coral reef fishes exposed to sustained near-future CO2 levels. The changes have been linked to altered function of GABAergic neurotransmitter systems, because the behavioural alterations can be reversed rapidly by treatment with the GABAA receptor antagonist gabazine. Characterization of the molecular mechanisms involved would be greatly aided if these can be examined in a well-characterized model organism with a sequenced genome. It was recently shown that CO2-induced behavioural alterations are not confined to tropical species, but also affect the three-spined stickleback, although an involvement of the GABAA receptor was not examined. Here, we show that loss of lateralization in the stickleback can be restored rapidly and completely by gabazine treatment. This points towards a worrying universality of disturbed GABAA function after high-CO2 exposure in fishes from tropical to temperate marine habitats. Importantly, the stickleback is a model species with a sequenced and annotated genome, which greatly facilitates future studies on underlying molecular mechanisms.
Abstract:
Many computer vision and human-computer interaction applications developed in recent years require evaluating complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of such functions often implies a very high computational cost that is unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
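To ground the idea, the Python sketch below builds one of the simplest piecewise linear schemes, an interpolating approximant on uniform knots (the kind of lookup a GPU texture unit evaluates for free), for the Gaussian, and checks it against the classical h^2/8 error bound. This is an illustration of the general technique, not the paper's near-optimal design.

```python
import numpy as np

def pwl_eval(x, knots, values):
    """Evaluate a piecewise linear approximant on uniform knots,
    mimicking fixed-function linear interpolation in texture units."""
    return np.interp(x, knots, values)

f = lambda x: np.exp(-0.5 * x * x)       # unit Gaussian
a, b, n = -4.0, 4.0, 64                  # domain, number of subintervals
knots = np.linspace(a, b, n + 1)
vals = f(knots)                          # interpolate f at the knots

x = np.linspace(a, b, 100001)
err = np.max(np.abs(pwl_eval(x, knots, vals) - f(x)))

# Classical bound for interpolating PWL: max|e| <= h^2/8 * max|f''|,
# and |f''| <= 1 for the unit Gaussian.
h = (b - a) / n
bound = h * h / 8.0
print(f"measured max error = {err:.2e}, theoretical bound = {bound:.2e}")
```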
Abstract:
A method is presented to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or exhibit zero mean; they are only required to be relatively uncorrelated with the clean information contained in the database. The method is based on an improved extension of a seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in the database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is evolved into a two-step error-filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information at these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors efficiently, both when they are random and when they are systematic, and both when they are concentrated and when they are spread across the database. The performance of the method is first illustrated using a two-dimensional toy-model database resulting from discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis of the method is presented to quantify the degree of randomness the method admits while maintaining correct performance, and to quantify the size of error the method can detect. Lastly, some improvements of the method are proposed, with their respective verification.
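For intuition on step (b), here is a minimal Python sketch of gappy reconstruction in the spirit of Everson and Sirovich (1995): mode coefficients are fitted on the entries flagged as reliable, and the fitted expansion fills in the rest. The error-location step (a) and the thesis's improvements are omitted; the function and variable names are illustrative.

```python
import numpy as np

def gappy_reconstruct(snapshot, mask, modes):
    """Reconstruct unreliable entries of a data vector from POD modes.

    snapshot : (n,) data vector; entries where mask is False are unreliable
    mask     : (n,) boolean array marking the known (clean) positions
    modes    : (n, m) matrix of POD modes from clean reference data
    """
    M = modes[mask]                          # mode rows at known positions
    coeffs, *_ = np.linalg.lstsq(M, snapshot[mask], rcond=None)
    filled = snapshot.copy()
    filled[~mask] = modes[~mask] @ coeffs    # rebuild the flagged entries
    return filled
```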
Abstract:
The countermanding paradigm was designed to investigate the ability to cancel a prepotent response when a stop signal is presented and allows estimation of the stop signal response time (SSRT), an otherwise unobservable behaviour. Humans exhibit adaptive control of behaviour in the countermanding task, proactively lengthening response time (RT) in expectation of stopping and reactively lengthening RT following stop trials or errors. Human performance changes throughout the lifespan, with longer RT, SSRT and greater emphasis on post-error slowing reported for older compared to younger adults. Inhibition in the task has generally been improved by drugs that increase extracellular norepinephrine. The current thesis examined a novel choice response countermanding task in rats to explore whether rodent countermanding performance is a suitable model for the study of adaptive control of behaviour, lifespan changes in behavioural control and the role of neurotransmitters in these behaviours. Rats reactively adjusted RT in the countermanding task, shortening RT after consecutive correct go trials and lengthening RT following non-cancelled, but not cancelled stop trials, in sessions with a 10 s, but not a 1 s post-error timeout interval. Rats proactively lengthened RT in countermanding task sessions compared to go trial-only sessions. Together, these findings suggest that rats strategically lengthened RT in the countermanding task to improve accuracy and avoid longer, unrewarded timeout intervals. Next, rats exhibited longer RT and relatively conserved post-error slowing, but no significant change in SSRT when tested at 12, compared to 7 months of age, suggesting that rats exhibit changes in countermanding task performance with aging similar to those observed in humans. Finally, acute administration of yohimbine (1.25, 2.5 mg/kg) and d-amphetamine (0.25, 0.5 mg/kg), which putatively increase extracellular norepinephrine and dopamine respectively, resulted in RT shortening, baseline-dependent effects on SSRT, and attenuated adaptive RT adjustments in rats in the case of d-amphetamine. These findings suggest that dopamine and norepinephrine encouraged motivated, reward-seeking behaviour and supported inhibitory control in an inverted-U-like fashion. Taken together, these observations validate the rat countermanding task for further study of the neural correlates and neurotransmitters mediating adaptive control of behaviour and lifespan changes in behavioural control.
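Since SSRT is described as otherwise unobservable, it may help to show how it is conventionally estimated. The Python sketch below implements the standard integration method from the race-model literature; this is the textbook estimator, not necessarily the exact procedure used in the thesis, and the argument names are illustrative.

```python
import numpy as np

def ssrt_integration(go_rts, ssds, p_respond_per_ssd):
    """Estimate stop-signal reaction time via the integration method:
    SSRT = nth go-RT quantile - mean stop-signal delay (SSD), where n
    is the overall probability of responding on stop trials.
    """
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    p_respond = float(np.mean(p_respond_per_ssd))   # overall P(respond|stop)
    idx = int(np.ceil(p_respond * len(go_rts))) - 1
    nth_rt = go_rts[min(max(idx, 0), len(go_rts) - 1)]
    return nth_rt - float(np.mean(ssds))

# Hypothetical session: go RTs in ms, SSDs in ms, P(respond) per SSD.
print(ssrt_integration(np.random.default_rng(0).normal(450, 60, 200),
                       [150, 200, 250, 300],
                       [0.2, 0.4, 0.6, 0.8]))
```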
Abstract:
A new radiolarian-based transfer function for sea surface temperature (SST) estimation has been developed from 23 taxa and taxa groups in 53 surface sediment samples recovered between 35° and 72°S in the Atlantic sector of the Southern Ocean. Ecological information from water-column studies was considered in selecting the taxa and taxa groups. The transfer function allows the estimation of austral summer SST (December-March) ranging between -1 and 18°C, with a standard error of estimate of 1.2°C. SST estimates from selected late Pleistocene sequences were successfully compared with independent paleotemperature estimates derived from a diatom transfer function. This shows that radiolarians provide an excellent tool for paleotemperature reconstructions in Pleistocene sediments of the Southern Ocean.
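To illustrate the general workflow of calibrating such a transfer function, here is a minimal Python sketch: taxa abundances in core-top samples are regressed against observed SST, and the standard error of estimate is computed from the residuals. A plain multiple regression stands in for the study's (unspecified) statistical method, and all names are illustrative.

```python
import numpy as np

def calibrate_transfer_function(abundances, sst):
    """Least-squares calibration of a taxa-based SST transfer function.

    abundances : (n_samples, n_taxa) relative abundances, e.g. the
                 23 taxa/taxa groups in the 53 core-top samples
    sst        : (n_samples,) observed summer SST at each site
    """
    X = np.hstack([abundances, np.ones((abundances.shape[0], 1))])
    coefs, *_ = np.linalg.lstsq(X, sst, rcond=None)
    residuals = sst - X @ coefs
    # Standard error of estimate, with degrees of freedom removed.
    see = np.sqrt(np.sum(residuals ** 2) / (len(sst) - X.shape[1]))
    return coefs, see
```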