953 results for Error analysis


Relevance: 30.00%

Abstract:

A growing number of corporate failure prediction models have emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in business practice. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (Discriminant Analysis, Logit and Probit) and two based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt and 125 bankrupt firms from the textile and clothing industry over the period 2003–09. Results show that all the models performed well, with an overall correct classification rate above 90% and a Type II error always below 2%. The Type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to support the decisions of creditors, investors and auditors. Additionally, this research can be of great value to the devisers of national economic policies that aim to reduce industrial unemployment.
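
To make the comparison concrete, here is a minimal, hedged sketch (not the authors' code) of how one of the statistical models, a logit classifier, can be scored by Type I and Type II error; the data are synthetic and only the sample sizes mirror the study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_healthy, n_failed = 420, 125                  # mirrors the study's sample sizes
X = np.vstack([rng.normal(0.0, 1.0, (n_healthy, 5)),
               rng.normal(1.2, 1.0, (n_failed, 5))])   # 5 invented financial ratios
y = np.array([0] * n_healthy + [1] * n_failed)  # 1 = bankrupt

model = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, model.predict(X)).ravel()
type_i = fn / (fn + tp)     # failed firm classified as healthy
type_ii = fp / (fp + tn)    # healthy firm classified as failed
accuracy = (tp + tn) / len(y)
print(f"accuracy={accuracy:.3f}  Type I={type_i:.3f}  Type II={type_ii:.3f}")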

Relevance: 30.00%

Abstract:

The rate-distortion performance of Wyner-Ziv video coding (WZVC) is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder has only some decoded reference frames available. The proposed WZVC compression-efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions can then be drawn about the impact of the motion field's smoothness, and of its correlation with the true motion trajectories, on compression performance.
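
As a rough illustration of the quantity the rate model is built on, the sketch below (assumptions only, not the paper's model) generates naive side information by frame averaging and computes the power spectrum of the resulting estimation error with NumPy.

import numpy as np

rng = np.random.default_rng(1)
prev_frame = rng.random((64, 64))
next_frame = np.roll(prev_frame, 2, axis=1)      # toy "motion": 2-pixel shift
true_frame = np.roll(prev_frame, 1, axis=1)      # the frame halfway between

side_info = 0.5 * (prev_frame + next_frame)      # naive interpolation (zero motion)
error = true_frame - side_info                   # side-information estimation error
psd = np.abs(np.fft.fft2(error)) ** 2 / error.size
print("estimation-error power:", psd.mean())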

Relevance: 30.00%

Abstract:

Purpose - The study evaluates the pre- and post-training lesion localisation ability of a group of novice observers. Parallels are drawn with the performance of inexperienced radiographers taking part in preliminary clinical evaluation (PCE) and 'red-dot' systems operating within radiography practice. Materials and methods - Thirty-four novice observers searched 92 images for simulated lesions. Pre-training and post-training evaluations were completed following the free-response receiver operating characteristic (FROC) method. Training covered observer performance methodology, the characteristics of the simulated lesions and information on lesion frequency. Jackknife alternative FROC (JAFROC) and highest-rating inferred ROC analyses were performed to evaluate the performance difference on lesion-based and case-based decisions. The significance level of the test was set at 0.05 to control the probability of Type I error. Results - JAFROC analysis (F(3,33) = 26.34, p < 0.0001) and highest-rating inferred ROC analysis (F(3,33) = 10.65, p = 0.0026) revealed a statistically significant difference in lesion detection performance. The JAFROC figure of merit was 0.563 (95% CI 0.512, 0.614) pre-training and 0.677 (95% CI 0.639, 0.715) post-training. The highest-rating inferred ROC figure of merit was 0.728 (95% CI 0.701, 0.755) pre-training and 0.772 (95% CI 0.750, 0.793) post-training. Conclusions - This study has demonstrated that novice observer performance can improve significantly. The study design may have relevance to the assessment of inexperienced radiographers taking part in PCE or commenting schemes for trauma.
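
For readers unfamiliar with the highest-rating inferred-ROC idea, the following hedged sketch shows the mechanics on invented data: each case is scored by its highest-rated mark (unmarked cases get the lowest score) and a conventional ROC AUC is computed. JAFROC itself is more involved and is not reproduced here.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# hypothetical mark ratings per case; 92 images split evenly for illustration
normal_cases = [rng.normal(0.3, 0.2, rng.integers(0, 3)) for _ in range(46)]
lesion_cases = [rng.normal(0.7, 0.2, rng.integers(1, 4)) for _ in range(46)]

def highest_rating(marks):
    # a case with no marks gets the lowest possible score
    return marks.max() if marks.size else 0.0

scores = [highest_rating(m) for m in normal_cases + lesion_cases]
labels = [0] * 46 + [1] * 46
print("inferred-ROC AUC:", round(roc_auc_score(labels, scores), 3))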

Relevance: 30.00%

Abstract:

Aiming at simple and accurate readings of citric acid (CA) in complex samples, citrate (CIT)-selective electrodes of tubular configuration were constructed, with polymeric membranes incorporating a quaternary ammonium ion exchanger. Several selective membranes were prepared for this purpose, with distinct mediator solvents (of quite different polarities) and, in some cases, p-tert-octylphenol (TOP) as an additive, the latter intended to increase selectivity. The general working characteristics of all prepared electrodes were evaluated in a low-dispersion flow injection analysis (FIA) manifold by injecting 500 µL of citrate standard solutions into an ionic strength (IS) adjuster carrier (10⁻² mol l⁻¹) flowing at 3 mL min⁻¹. Good potentiometric response, with an average slope of 61.9 mV per decade and a repeatability of ±0.8%, was obtained from selective membranes comprising the additive and bis(2-ethylhexyl) sebacate (bEHS) as mediator solvent. The same membranes also showed the best selectivity characteristics, assessed by the separate solutions method for several chemical species, such as chloride, nitrate, ascorbate, glucose, fructose and sucrose. Pharmaceutical preparations, soft drinks and beers were analyzed under conditions that enabled simultaneous pH and ionic strength adjustment (pH = 3.2; ionic strength = 10⁻² mol l⁻¹), and the results agreed well with the reference method (relative error < 4%). These experimental conditions promoted a significant increase in the sensitivity of the potentiometric response, with a supra-Nernstian slope of 80.2 mV per decade, and allowed the analysis of about 90 samples per hour with a relative standard deviation < 1.0%.
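
The slope figures quoted above come from calibration: a sketch of the standard computation follows, fitting electrode potential against the logarithm of concentration to obtain mV per decade. The data points are hypothetical; for an anion-selective electrode the fitted slope is negative, and the abstract reports its magnitude.

import numpy as np

conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])   # mol/L, hypothetical standards
emf = np.array([210.0, 148.5, 86.3, 24.6])  # mV, hypothetical readings

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {slope:.1f} mV/decade")      # about -61.8 mV/decade here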

Relevance: 30.00%

Abstract:

A flow injection analysis (FIA) system comprising a cysteine-selective electrode as the detection system was developed for the determination of this amino acid in pharmaceuticals. Several electrodes were constructed for this purpose, with PVC membranes containing different ion exchangers and mediator solvents. The best working characteristics were attained with membranes comprising o-nitrophenyl octyl ether as mediator solvent and a tetraphenylborate-based ion sensor. Injection of 500 µL standard solutions into a barium chloride ionic strength adjuster carrier (3×10⁻³ M) flowing at 2.4 mL min⁻¹ showed linear ranges from 5.0×10⁻⁵ to 5.0×10⁻³ M, with slopes of 76.4 ± 0.6 mV decade⁻¹ and R² > 0.9935. The slope decreased significantly when a pH adjustment, selected at 4.5, was required. Interference from several compounds (sodium, potassium, magnesium, barium, glucose, fructose and sucrose) was estimated by potentiometric selectivity coefficients and considered negligible. Analyses of real samples were performed and considered accurate, with a relative error of +2.7% with respect to an independent method.
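
The selectivity coefficients mentioned above are commonly estimated with the separate solutions method; a hedged sketch of that calculation for equally charged ions follows, with all potentials assumed for illustration.

# separate solutions method: log K = (E_B - E_A) / S for ions of equal charge
E_A = 120.0    # mV, potential in the primary-ion solution (hypothetical)
E_B = -45.0    # mV, potential in the interferent solution (hypothetical)
S = 59.2       # mV/decade, electrode slope (hypothetical)

log_K = (E_B - E_A) / S
print(f"log K = {log_K:.2f}  (K = {10 ** log_K:.1e}; negligible when K << 1)")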

Relevance: 30.00%

Abstract:

This paper addresses the impact of the CO2 opportunity cost on the wholesale electricity price in the context of the Iberian electricity market (MIBEL), namely in the Portuguese system, for the period corresponding to Phase II of the European Union Emissions Trading Scheme (EU ETS). In the econometric analysis, a vector error correction model (VECM) is specified to estimate both the long-run equilibrium relations and the short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The model is estimated using daily spot market prices, with the four commodity prices jointly modelled as endogenous variables. Moreover, a set of exogenous variables is incorporated to account for electricity demand conditions (temperature) and the electricity generation mix (quantity of electricity traded according to the technology used). The outcomes for the Portuguese electricity system suggest that the dynamic pass-through of carbon prices into electricity prices is strongly significant, and the estimated long-run elasticity (equilibrium relation) is aligned with studies conducted for other markets.
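
A minimal sketch of this kind of VECM fit, using statsmodels on simulated cointegrated price series, is given below; the data, lag order and rank are placeholders, not the paper's specification.

import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
n = 500
trend = np.cumsum(rng.normal(size=n))            # shared stochastic trend
data = pd.DataFrame({
    "electricity": trend + rng.normal(size=n),
    "gas": 0.8 * trend + rng.normal(size=n),
    "coal": 0.5 * trend + rng.normal(size=n),
    "carbon": 0.3 * trend + rng.normal(size=n),
})

res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.beta)   # cointegrating vector: the long-run (equilibrium) relation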

Relevance: 30.00%

Abstract:

This paper presents a spatial econometric analysis of the number of road accidents with victims in the smallest administrative divisions of Lisbon, taking as a baseline a log-Poisson model of environmental factors. Spatial correlation is investigated both in the raw data and in the residuals of the baseline model, without and with spatially autocorrelated and spatially lagged terms. In all cases, no spatial autocorrelation was detected.
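
As an illustration of the kind of spatial-autocorrelation check reported, here is a hedged sketch computing Moran's I on synthetic model residuals over a rook-contiguity grid standing in for Lisbon's administrative units.

import numpy as np

rng = np.random.default_rng(4)
side = 6                                    # a 6x6 grid of synthetic "divisions"
resid = rng.normal(size=side * side)        # baseline-model residuals (invented)

# rook-contiguity spatial weights on the grid
n = side * side
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side:
                W[k, ni * side + nj] = 1.0

z = resid - resid.mean()
moran_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {moran_i:.3f} (expectation under no autocorrelation: {-1/(n-1):.3f})")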

Relevance: 30.00%

Abstract:

A noncoherent vector delay/frequency-locked loop (VDFLL) architecture for GNSS receivers is proposed. A bank of code and frequency discriminators feeds a central extended Kalman filter that estimates the receiver's position and velocity, as well as the clock error. The performance of the VDFLL architecture is compared with that of the classic scalar receiver, for both scintillation and multipath scenarios, in terms of position errors. We show that the proposed solution is superior to conventional scalar receivers, which tend to lose lock rapidly due to sudden drops in the received signal power.
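
A minimal sketch of the central EKF assumed by a VDFLL-style receiver follows; the eight-state model (3-D position, 3-D velocity, clock bias, clock drift), the noise values and the toy measurement are illustrative, with the code/frequency discriminators abstracted into a generic linearized measurement.

import numpy as np

dt = 0.02                                    # update interval (s), assumed
F = np.eye(8)
F[0:3, 3:6] = dt * np.eye(3)                 # position driven by velocity
F[6, 7] = dt                                 # clock bias driven by clock drift

x = np.zeros(8)                              # state estimate
P = np.eye(8)                                # state covariance
Q = 1e-3 * np.eye(8)                         # process noise (tuning value)

def predict(x, P):
    # propagate state and covariance one interval ahead
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # standard EKF update; in a VDFLL, H linearizes the mapping from the
    # position/velocity/clock states to code-phase and Doppler measurements
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(8) - K @ H) @ P

x, P = predict(x, P)
H = np.zeros((1, 8)); H[0, 0] = 1.0          # toy measurement of x-position
x, P = update(x, P, np.array([0.5]), H, np.array([[0.01]]))
print(x[:3])                                 # updated position estimate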

Relevance: 30.00%

Abstract:

The behavior of robotic manipulators with backlash is analyzed. Based on the pseudo-phase plane, two indices are proposed to evaluate the effect of backlash on the robotic system: the root mean square error and the fractal dimension. For the dynamical analysis, the noisy signals captured from the system are filtered using wavelets. Several tests are presented that demonstrate the coherence of the results.
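
The two indices can be illustrated with a short sketch: root mean square error against a reference signal, and a box-counting estimate of the fractal dimension of the pseudo-phase-plane trajectory. The signal, delay and box sizes are assumptions, and the wavelet filtering step is omitted.

import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 10 * np.pi, 4000)
clean = np.sin(t)                                # reference trajectory
signal = clean + 0.05 * rng.normal(size=t.size)  # captured (noisy) signal

rmse = np.sqrt(np.mean((signal - clean) ** 2))   # first index: RMS error

tau = 25                                         # delay for the pseudo-phase plane
pts = np.column_stack([signal[:-tau], signal[tau:]])

def box_count_dim(points, sizes=(0.5, 0.25, 0.125, 0.0625)):
    counts = [len({tuple(np.floor(p / s).astype(int)) for p in points})
              for s in sizes]
    # slope of log(count) vs log(1/size) estimates the fractal dimension
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

print(f"RMSE = {rmse:.4f}, box-counting dimension = {box_count_dim(pts):.2f}")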

Relevance: 30.00%

Abstract:

Proceedings of the Information Technology Applications in Biomedicine, Ioannina - Epirus, Greece, October 26-28, 2006

Relevance: 30.00%

Abstract:

Dissertation submitted in fulfilment of the requirements for the Degree of Master in Biomedical Engineering

Relevance: 30.00%

Abstract:

The aim of this study was to evaluate the efficacy of the Old Way/New Way methodology (Lyndon, 1989/2000) in permanently correcting a consolidated and automated technical error in the serve of a tennis athlete (18 years old, practising the sport for about six years). Additionally, the study assessed the impact of the intervention on the athlete's psychological skills. An individualized intervention was designed using strategies aimed at producing (a) a detailed analysis of the error using video images; (b) increased kinaesthetic awareness; (c) a reactivation of the error memory; and (d) discrimination and generalization of the correct motor action. The athlete's psychological skills were measured with a Portuguese version of the Psychological Skills Inventory for Sports (Cruz & Viana, 1993). After the intervention, the technical error was corrected with great efficacy, and an increase in the athlete's psychological skills was verified. This study demonstrates the methodology's efficacy, consistent with the effects of this type of intervention in other contexts.

Relevance: 30.00%

Abstract:

Modern telecommunication equipment requires components that operate in many different frequency bands and support multiple communication standards, to cope with the growing demand for higher data rates. A growing number of standards also adopt spectrum-efficient digital modulations, such as quadrature amplitude modulation (QAM) and orthogonal frequency division multiplexing (OFDM). These modulation schemes require accurate quadrature oscillators, which makes the quadrature oscillator a key block in modern radio frequency (RF) transceivers. The wide tuning range of inductorless quadrature oscillators makes them natural candidates, despite their higher phase noise in comparison with LC oscillators.

This thesis presents a detailed study of inductorless sinusoidal quadrature oscillators. Three quadrature oscillators are investigated: the active-coupling RC-oscillator, the novel capacitive-coupling RC-oscillator, and the two-integrator oscillator. The thesis includes a detailed analysis of the Van der Pol oscillator (VDPO), which is used as the base model for the analysis of the coupled oscillators; the three oscillators are approximated by the VDPO. From the nonlinear Van der Pol equations the oscillators' key parameters are obtained, first for the case without component mismatches and then for the case with mismatches. The research focuses on determining the impact of component mismatches on the oscillators' key parameters (frequency, amplitude error and quadrature error) and on minimizing the errors by adjusting the circuit parameters.

A novel quadrature RC-oscillator using capacitive coupling is proposed. The advantages of capacitive coupling are that it is noiseless, requires a small area, and has low power dissipation. The equations of the oscillation amplitude, frequency, quadrature error and amplitude mismatch are derived. The theoretical results are confirmed by simulation and by measurement of two prototypes fabricated in 130 nm standard complementary metal-oxide-semiconductor (CMOS) technology. The measurements reveal that the power increase due to the coupling is marginal, leading to a figure-of-merit of -154.8 dBc/Hz. These results are consistent with the noiseless nature of this coupling and are comparable to those of the best state-of-the-art RC-oscillators in the GHz range, but with the lowest power consumption (about 9 mW).

The results for the three oscillators show that the amplitude and quadrature errors are proportional to the component mismatches and inversely proportional to the coupling strength; increasing the coupling strength therefore decreases both errors. With proper coupling strength, a quadrature error below 1° and an amplitude imbalance below 1% are obtained. Furthermore, the simulations show that increasing the coupling strength reduces the phase noise, so there is no trade-off between phase noise and quadrature error. In the two-integrator oscillator study, it was found that the quadrature error can be eliminated by adjusting the transconductances to compensate for the capacitance mismatch; however, to obtain outputs in perfect quadrature one must allow some amplitude error.
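
As a toy illustration of the Van der Pol base model, the hedged sketch below simulates two antisymmetrically coupled Van der Pol oscillators with SciPy and estimates their steady-state phase difference; the parameter values are arbitrary and none of the thesis's circuit-level details are modelled.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import hilbert

mu, k = 0.5, 0.3          # nonlinearity and coupling strength (illustrative)

def coupled_vdp(t, s):
    x1, v1, x2, v2 = s
    dv1 = mu * (1 - x1**2) * v1 - x1 + k * v2   # oscillator 1, driven by 2
    dv2 = mu * (1 - x2**2) * v2 - x2 - k * v1   # oscillator 2, driven by 1
    return [v1, dv1, v2, dv2]

sol = solve_ivp(coupled_vdp, (0, 200), [1.0, 0.0, 0.0, 1.0],
                t_eval=np.linspace(150, 200, 5000), rtol=1e-8)  # skip transient
x1, x2 = sol.y[0], sol.y[2]

phase = np.angle(hilbert(x1)) - np.angle(hilbert(x2))
mean_deg = np.rad2deg(np.angle(np.exp(1j * phase)).mean())
print(f"steady-state phase difference ~ {mean_deg:.1f} deg (ideal quadrature: +/-90)")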

Relevance: 30.00%

Abstract:

The verification and analysis of programs with probabilistic features is a necessary task in today's scientific and technological work. The success, and subsequent spread, of hardware-level implementations of communication protocols and of probabilistic solutions to distributed problems make the use of stochastic agents as programming elements more than interesting. In many of these cases the use of random agents produces better and more efficient solutions; in others it provides solutions where none can be found by traditional methods. These algorithms are generally embedded in multiple hardware mechanisms, so an error in them can produce an unwanted multiplication of their harmful effects.

Currently, the greatest effort in the analysis of probabilistic programs goes into the study and development of tools known as probabilistic model checkers. Given a finite model of the stochastic system, these tools automatically obtain several performance measures of it. Although this can be quite useful for verifying programs, for general-purpose systems it becomes necessary to check more complete specifications bearing on the correctness of the algorithm. It would even be interesting to obtain the properties of the system automatically, in the form of invariants and counterexamples.

This project aims to address the static analysis of probabilistic programs through the use of deductive tools such as theorem provers and SMT solvers, which have shown their maturity and effectiveness in attacking problems of traditional programming. In order not to lose automation, we will work within the framework of Abstract Interpretation, which provides an outline for our theoretical development. At the same time, we will put these foundations into practice through concrete implementations that use those tools.
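
A small, hedged illustration of the deductive approach described above: the sketch uses Z3's Python API to check that a candidate invariant is inductive for a toy (non-probabilistic) loop, the kind of building block such a project would extend to probabilistic programs.

from z3 import Ints, And, Not, Solver, sat

x, y, xp, yp = Ints("x y xp yp")
inv = x + y >= 0                          # candidate invariant
step = And(xp == x + 1, yp == y - 1)      # loop body as a transition relation
inv_next = xp + yp >= 0                   # invariant after one step

s = Solver()
s.add(inv, step, Not(inv_next))           # search for a counterexample
if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("invariant is inductive")       # expected: x + y is preserved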