956 results for Mathematical Techniques--Error Analysis


Relevance:

100.00%

Publisher:

Abstract:

Analyzing the state of the art in a given field in order to tackle a new problem is always a mandatory task. The literature provides surveys based on summaries of previous studies, which are often limited to theoretical descriptions of the methods. An engineer, however, requires evidence from experimental evaluations in order to make an appropriate decision when selecting a technique for a problem. This is what we have done in this paper: experimentally analyzed a set of representative state-of-the-art techniques on the problem we are dealing with, namely, the road passenger transportation problem. This is an optimization problem in which drivers must be assigned to transport services, fulfilling some constraints and minimizing a cost function. The experimental results have provided us with good knowledge of the properties of several methods, such as modeling expressiveness, anytime behavior, computational time, memory requirements, parameters, and freely downloadable tools. Based on our experience, we are able to choose a technique to solve our problem. We hope that this analysis is also helpful for other engineers facing a similar problem.
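As a minimal illustration of this kind of assignment problem (a hypothetical toy instance, not the paper's formulation or data), drivers can be matched to services by exhaustive search over permutations, rejecting assignments that violate a feasibility constraint:

```python
from itertools import permutations

def assign_drivers(cost, can_drive):
    """Brute-force driver-to-service assignment.

    cost[d][s]      -- cost of assigning driver d to service s
    can_drive[d][s] -- feasibility constraint (licences, rest rules, ...)
    Returns (best_total_cost, assignment) with assignment[s] = driver.
    """
    n = len(cost)  # square instance: n drivers, n services
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):  # perm[s] = driver serving s
        if all(can_drive[perm[s]][s] for s in range(n)):
            total = sum(cost[perm[s]][s] for s in range(n))
            if total < best_cost:
                best_cost, best_perm = total, perm
    return best_cost, best_perm

# Hypothetical 3-driver, 3-service instance
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
can_drive = [[True, True, True],
             [True, True, False],   # driver 1 may not take service 2
             [True, True, True]]
total, assignment = assign_drivers(cost, can_drive)
```

Exhaustive search is only viable for tiny instances; the techniques compared in the paper (constraint programming, metaheuristics, and the like) are what make realistic problem sizes tractable.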

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new methodology for the adjustment of fuzzy inference systems. A novel approach, which uses unconstrained optimization techniques, is developed in order to adjust the free parameters of the fuzzy inference system, such as the intrinsic parameters of the membership functions and the weights of the inference rules. This methodology is interesting not only for the results obtained through computer simulations, but also for its generality regarding the kind of fuzzy inference system used; it is therefore applicable both to the Mamdani architecture and to that suggested by Takagi and Sugeno. The presented methodology is validated through time-series estimation; more specifically, estimation of the Mackey-Glass chaotic time series is used for the validation.
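A minimal sketch of the idea, assuming a zero-order Takagi-Sugeno system with two Gaussian rules and a toy linear target in place of the Mackey-Glass series (all parameter choices here are illustrative, not the paper's):

```python
import math

def ts_output(x, p):
    """Zero-order Takagi-Sugeno system with two Gaussian rules.
    p = [c1, c2, w1, w2]: membership centres and rule consequents."""
    c1, c2, w1, w2 = p
    mu1 = math.exp(-((x - c1) ** 2) / 2.0)   # fixed unit width for the sketch
    mu2 = math.exp(-((x - c2) ** 2) / 2.0)
    return (mu1 * w1 + mu2 * w2) / (mu1 + mu2 + 1e-12)

def fit(samples, p, lr=0.05, steps=2000, h=1e-5):
    """Unconstrained optimization (finite-difference gradient descent)
    over all free parameters at once -- centres and rule weights."""
    def loss(q):
        return sum((ts_output(x, q) - y) ** 2 for x, y in samples)
    p = list(p)
    for _ in range(steps):
        base = loss(p)
        grad = []
        for i in range(len(p)):
            q = list(p)
            q[i] += h
            grad.append((loss(q) - base) / h)
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
    return p, loss(p)

# Toy target y = x on [-1, 1] -- a stand-in for a real benchmark
# such as the Mackey-Glass series used in the paper.
samples = [(x / 5.0, x / 5.0) for x in range(-5, 6)]
params, err = fit(samples, [-0.5, 0.5, 0.0, 0.0])
```

The point of the unconstrained formulation is that membership parameters and rule weights are treated uniformly by the same descent step, regardless of the inference architecture.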

Relevance:

100.00%

Publisher:

Abstract:

Recently, the development of industrial processes has brought about the emergence of technologically complex systems. This development generated the need for research into mathematical techniques capable of dealing with design complexity and validation. Fuzzy models have been receiving particular attention in the area of nonlinear systems identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules that give local linear representations of an underlying system; such models can approximate a wide class of nonlinear systems. In this paper, a performance analysis of a system based on a TS fuzzy inference system for the calibration of electronic compass devices is considered. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the error noise must be filtered before the TS fuzzy inference system is applied. The proposed method demonstrated an effectiveness of 57% at reducing the total error in the tests considered. (C) 2011 Elsevier Ltd. All rights reserved.
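To make the "local linear representations" concrete, here is a hypothetical first-order TS model with two rules that blends the local lines y = -x and y = x into a smooth approximation of |x| (the sigmoidal memberships and the gain of 5 are arbitrary choices for the sketch, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ts_abs(x):
    """First-order TS approximation of y = |x| with two rules:
      R1: IF x is NEGATIVE THEN y = -x
      R2: IF x is POSITIVE THEN y =  x
    Sigmoidal memberships give a smooth blend of the two local lines."""
    mu_neg = sigmoid(-5.0 * x)   # high for x well below 0
    mu_pos = sigmoid(5.0 * x)    # high for x well above 0
    return (mu_neg * (-x) + mu_pos * x) / (mu_neg + mu_pos)
```

Away from the switch point the blend follows the dominant local line almost exactly, which is the mechanism that lets TS models approximate a wide class of nonlinear maps from a handful of linear pieces.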

Relevance:

100.00%

Publisher:

Abstract:

Surge flow phenomena, e.g., as a consequence of a dam failure or a flash flood, represent free boundary problems. The extending computational domain, together with the discontinuities involved, renders their numerical solution a cumbersome procedure. This contribution proposes an analytical solution to the problem. It is based on the slightly modified zero-inertia (ZI) differential equations for nonprismatic channels and uses exclusively physical parameters. Employing the concept of a momentum-representative cross section of the moving water body, together with a specific relationship for describing the cross-sectional geometry, leads, after considerable mathematical calculus, to the analytical solution. The hydrodynamic analytical model is free of numerical troubles, easy to run, computationally efficient, and fully satisfies the law of volume conservation. In a first test series, the hydrodynamic analytical ZI model compares very favorably with a full hydrodynamic numerical model with respect to published results of surge flow simulations in different types of prismatic channels. In order to extend these considerations to natural rivers, the accuracy of the analytical model in describing an irregular cross section is investigated and tested successfully. A sensitivity and error analysis reveals the important impact of the hydraulic radius on the velocity of the surge, which underlines the importance of an adequate description of the topography. The new approach is finally applied to simulate a surge propagating down the irregularly shaped Isar Valley in the Bavarian Alps after a hypothetical dam failure. The straightforward and fully stable computation of the flood hydrograph along the Isar Valley clearly reflects the impact of the strongly varying topographic characteristics on the flow phenomenon.
Apart from treating surge flow phenomena as a whole, the analytical solution also offers a rigorous alternative to both (a) the approximate Whitham solution, for generating initial values, and (b) the rough volume balance techniques used to model the wave tip in numerical surge flow computations.
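For context, the zero-inertia approximation referred to above drops the inertia terms of the Saint-Venant momentum equation. In its standard textbook (diffusive-wave) form, which the paper's modified nonprismatic version may refine, it reads:

```latex
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
\qquad
S_f = S_0 - \frac{\partial h}{\partial x}
```

where $A$ is the flow cross-sectional area, $Q$ the discharge, $h$ the flow depth, $S_0$ the bed slope, and $S_f$ the friction slope.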

Relevance:

100.00%

Publisher:

Abstract:

A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in business practice. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (discriminant analysis, logit, and probit) and two based on artificial intelligence (neural networks and rough sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms belonging to the textile and clothing industry over the period 2003-09. Results show that all the models performed well, with an overall correct classification rate higher than 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress, and they can be used to assist the decisions of creditors, investors, and auditors. Additionally, this research can be of great value to devisers of national economic policies that aim to reduce industrial unemployment.
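As a sketch of the simplest of the statistical techniques compared (the logit), here is a plain gradient-descent logistic regression on a hypothetical one-ratio dataset; the data and feature are invented for illustration and have nothing to do with the study's sample:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(xs, ys, lr=0.1, steps=5000):
    """Plain gradient-descent logit: P(failure) = sigmoid(b0 + b1*x)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            e = sigmoid(b0 + b1 * x) - y   # prediction error
            g0 += e
            g1 += e * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical single-ratio dataset: x = leverage, y = 1 if the firm failed
xs = [0.2, 0.3, 0.4, 0.5, 0.9, 1.0, 1.1, 1.2]
ys = [0,   0,   0,   0,   1,   1,   1,   1]
b0, b1 = fit_logit(xs, ys)
preds = [1 if sigmoid(b0 + b1 * x) > 0.5 else 0 for x in xs]
type1 = sum(p == 0 and y == 1 for p, y in zip(preds, ys))  # missed failures
type2 = sum(p == 1 and y == 0 for p, y in zip(preds, ys))  # false alarms
```

The type I / type II split mirrors the error analysis in the abstract: a missed failure (type I) is usually costlier to a creditor than a false alarm (type II).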

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Epidermal growth factor receptor (EGFR) and its downstream factors KRAS and BRAF are mutated in several types of cancer, affecting the clinical response to EGFR inhibitors. Mutations in the EGFR kinase domain predict sensitivity to the tyrosine kinase inhibitors gefitinib and erlotinib in lung adenocarcinoma, while activating point mutations in KRAS and BRAF confer resistance to the anti-EGFR monoclonal antibody cetuximab in colorectal cancer. The development of new-generation methods for systematic mutation screening of these genes will allow more appropriate therapeutic choices. METHODS: We describe a high resolution melting (HRM) assay for mutation detection in EGFR exons 19-21, KRAS codon 12/13 and BRAF V600 using formalin-fixed paraffin-embedded samples. Somatic variation of KRAS exon 2 was also analysed by massively parallel pyrosequencing of amplicons with the GS Junior 454 platform. RESULTS: We tested 120 routine diagnostic specimens from patients with colorectal or lung cancer. Mutations in KRAS, BRAF and EGFR were observed in 41.9%, 13.0% and 11.1% of the overall samples, respectively, and were mutually exclusive. For KRAS, six types of substitutions were detected (17 G12D, 9 G13D, 7 G12C, 2 G12A, 2 G12V, 2 G12S), while V600E accounted for all the BRAF activating mutations. Regarding EGFR, two cases showed exon 19 deletions (delE746-A750 and delE746-T751insA) and another two showed substitutions in exon 21 (one had L858R together with the resistance mutation T790M in exon 20, and the other had the P848L mutation). Consistent with earlier reports, our results show that KRAS and BRAF mutation frequencies in colorectal cancer were 44.3% and 13.0%, respectively, while EGFR mutations were detected in 11.1% of the lung cancer specimens. Ultra-deep amplicon pyrosequencing successfully validated the HRM results and allowed detection and quantitation of KRAS somatic mutations. 
CONCLUSIONS: HRM is a rapid and sensitive method for moderate-throughput, cost-effective screening of oncogene mutations in clinical samples. Compared with Sanger sequencing for validation, next-generation sequencing yields more accurate quantitative results for somatic variation and can be performed at a higher throughput scale.

Relevance:

100.00%

Publisher:

Abstract:

The basis set superposition error-free second-order Møller-Plesset perturbation theory of intermolecular interactions was studied. The difficulties of the counterpoise (CP) correction in open-shell systems were also discussed. The calculations were performed by a program used for testing the new variants of the theory. It was shown that the CP correction for the diabatic surfaces should be preferred to the adiabatic ones.
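For reference, the standard Boys-Bernardi counterpoise correction underlying this discussion evaluates every fragment in the full dimer basis (superscripts denote the basis set used, subscripts the fragment):

```latex
E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB}
```

The monomer energies $E_{A}^{AB}$ and $E_{B}^{AB}$ are computed with the ghost functions of the partner fragment, removing the artificial stabilization caused by basis set incompleteness; the open-shell and diabatic-surface variants discussed in the abstract modify this textbook scheme.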

Relevance:

100.00%

Publisher:

Abstract:

We describe a simple method to automate the geometry optimization of molecular orbital calculations of supermolecules on potential surfaces that are corrected for basis set superposition error using the counterpoise (CP) method. This method is applied to the H-bonding complexes HF/HCN, HF/H2O, and HCCH/H2O using the 6-31G(d,p) and D95++(d,p) basis sets at both the Hartree-Fock and second-order Møller-Plesset levels. We report the interaction energies, geometries, and vibrational frequencies of these complexes on the CP-optimized surfaces, and compare them with similar values calculated using traditional methods, including the (more traditional) single-point CP correction. Upon optimization on the CP-corrected surface, the interaction energies become more negative (before vibrational corrections) and the H-bond stretching vibrations decrease in all cases. The extent of these effects varies from extremely small to quite large depending on the complex and the calculational method. The relative magnitudes of the vibrational corrections cannot be predicted from the H-bond stretching frequencies alone.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we obtain sharp asymptotic formulas with error estimates for the Mellin convolution of functions defined on (0, ∞), and use these formulas to characterize the asymptotic behavior of marginal distribution densities of stock price processes in mixed stochastic models. Special examples of mixed models are jump-diffusion models and stochastic volatility models with jumps. We apply our general results to the Heston model with double exponential jumps and make a detailed analysis of the asymptotic behavior of the stock price density, the call option pricing function, and the implied volatility in this model. We also obtain similar results for the Heston model with jumps distributed according to the NIG law.
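The Mellin convolution in question is, in its standard form,

```latex
(f \star g)(x) \;=\; \int_{0}^{\infty} f\!\left(\frac{x}{y}\right) g(y)\,\frac{dy}{y},
\qquad
\mathcal{M}[f \star g](s) \;=\; \mathcal{M}[f](s)\,\mathcal{M}[g](s)
```

so that the Mellin transform factorizes the convolution, which is why marginal densities built from independent multiplicative components (diffusion part times jump part) are amenable to sharp asymptotic analysis.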

Relevance:

100.00%

Publisher:

Abstract:

The intention of this paper was to review and discuss some of the current quantitative analytical procedures used for quality control of pharmaceutical products. The selected papers were organized according to the analytical technique employed. Techniques such as ultraviolet/visible spectrophotometry, fluorimetry, titrimetry, electroanalytical techniques, chromatographic methods (thin-layer chromatography, gas chromatography, and high-performance liquid chromatography), capillary electrophoresis, and vibrational spectroscopies are the main techniques that have been used for the quantitative analysis of pharmaceutical compounds. In conclusion, although simple techniques such as UV/VIS spectrophotometry and TLC are still extensively employed, HPLC is the most popular instrumental technique used for the analysis of pharmaceuticals. Moreover, a review of recent work in the area of pharmaceutical analysis shows a trend toward the application of increasingly rapid techniques, such as ultra-performance liquid chromatography, and the use of sensitive and specific detectors such as mass spectrometers.

Relevance:

100.00%

Publisher:

Abstract:

The ever-increasing demand for new services from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall perception of spectrum scarcity. In order to circumvent this problem, two possible solutions are being investigated: (i) implementing new technologies capable of accessing temporarily or locally unused bands without interfering with licensed services, such as Cognitive Radio; (ii) releasing some spectrum bands thanks to new services providing higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. These two approaches are promising but also pose novel coexistence and interference management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular, and random locations of the network nodes, requires advanced mathematical techniques in order to explicitly model their spatial distribution; in such a context, system performance and optimization are strongly dependent on this spatial configuration. On the other hand, allocating released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. In this thesis, these methodologies for better spectrum usage are investigated. In particular, using stochastic geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for the coverage probability and single-integral forms for the average downlink rate and the average symbol error probability. Then, focusing on more regulatory aspects, interference challenges between DVB-T and LTE systems are analysed, and a versatile methodology for their proper coexistence is proposed.
Moreover, the studies performed inside the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios and an analysis of the Hidden Node problem are provided. Finally, a study on the extension of cognitive technologies to Hybrid Satellite Terrestrial Systems is proposed.
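The coverage probability mentioned above can be illustrated with a small Monte-Carlo sketch over a Poisson point process. This toy model (nearest-transmitter association, no fading, interference-limited, finite disc) is a deliberate simplification for illustration, not the thesis' closed-form framework:

```python
import math
import random

def _poisson(rng, mean):
    """Knuth's algorithm: sample a Poisson random variable (small means)."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def coverage_probability(lam, theta, alpha=4.0, radius=20.0,
                         trials=2000, seed=1):
    """Monte-Carlo coverage probability for a downlink Poisson network.

    Transmitters form a Poisson point process of density `lam` in a disc
    of radius `radius`; the typical user at the origin is served by the
    nearest point and the rest interfere; coverage means SIR > theta
    with path-loss exponent `alpha` (Rayleigh fading omitted here)."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        n = _poisson(rng, lam * math.pi * radius ** 2)
        pts = []
        for _ in range(n):
            r = radius * math.sqrt(rng.random())   # uniform in the disc
            phi = 2.0 * math.pi * rng.random()
            pts.append((r * math.cos(phi), r * math.sin(phi)))
        if not pts:
            continue
        d = sorted(math.hypot(x, y) for x, y in pts)
        signal = d[0] ** (-alpha)
        interference = sum(di ** (-alpha) for di in d[1:])
        if interference == 0 or signal / interference > theta:
            covered += 1
    return covered / trials
```

Running `coverage_probability(0.01, 1.0)` versus `coverage_probability(0.01, 10.0)` shows the expected monotone drop in coverage as the SIR threshold rises, the basic trade-off the closed-form stochastic-geometry expressions capture analytically.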

Relevance:

100.00%

Publisher:

Abstract:

A hierarchy of enzyme-catalyzed positive feedback loops is examined by mathematical and numerical analysis. Four systems are described, from the simplest, in which an enzyme catalyzes its own formation from an inactive precursor, to the most complex, in which two sequential feedback loops act in a cascade. In the latter we also examine the function of a long-range feedback, in which the final enzyme produced in the second loop activates the initial step in the first loop. When the enzymes generated are subject to inhibition or inactivation, all four systems exhibit threshold properties akin to excitable systems like neuron firing. For those that are amenable to mathematical analysis, expressions are derived that relate the excitation threshold to the kinetics of enzyme generation and inhibition and the initial conditions. For the most complex system, it was expedient to employ numerical simulation to demonstrate threshold behavior, and in this case long-range feedback was seen to have two distinct effects. At sufficiently high catalytic rates, this feedback is capable of exciting an otherwise subthreshold system. At lower catalytic rates, where the long-range feedback does not significantly affect the threshold, it nonetheless has a major effect in potentiating the response above the threshold. In particular, oscillatory behavior observed in simulations of sequential feedback loops is abolished when a long-range feedback is present.
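A minimal sketch of the threshold behavior described: a single autocatalytic loop with first-order inactivation, dx/dt = k*x^2 - d*x, whose unstable fixed point x* = d/k separates decay from runaway activation. This toy model and its parameters are illustrative, not the paper's enzyme cascades:

```python
def simulate(x0, k=1.0, d=1.0, dt=0.01, t_end=10.0, cap=10.0):
    """Euler integration of a minimal autocatalytic loop with inactivation:
        dx/dt = k*x**2 - d*x
    The unstable fixed point x* = d/k acts as the excitation threshold:
    below it the active enzyme decays to zero, above it the response
    self-amplifies (integration stops once x exceeds `cap`)."""
    x, t = x0, 0.0
    while t < t_end and x < cap:
        x += dt * (k * x * x - d * x)
        t += dt
    return x

sub = simulate(0.5)   # below threshold x* = 1: decays toward zero
sup = simulate(1.5)   # above threshold: runs away (all-or-none response)
```

The all-or-none behavior of this single loop is the elementary building block; the paper's analysis shows how cascading such loops, and adding long-range feedback, reshapes the threshold and the supra-threshold response.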