966 results for Algebra of Errors
Abstract:
This paper discusses five strategies for dealing with five types of errors in Qualitative Comparative Analysis (QCA): condition errors, systematic errors, random errors, calibration errors, and deviant-case errors. The strategies are: the comparative inspection of complex, intermediate, and parsimonious solutions; the use of an adjustment factor; the use of probabilistic criteria; the testing of the robustness of calibration parameters; and the use of a frequency threshold for observed combinations of conditions. Each strategy is systematically reviewed and evaluated with regard to its applicability, advantages, limitations, and complementarities.
Abstract:
Thirty-seven insulin-dependent and non-insulin-dependent diabetics answered a multiple-choice questionnaire, comprising 12 dietetic and 12 pathophysiologic questions, during inpatient educational sessions. Statistical analysis of the factors influencing the number of errors can be summed up as follows: the number of errors correlates directly with patient age; the older the patient, the more errors. However, insulin-dependent diabetics committed fewer errors than non-insulin-dependent subjects of the same age, which suggests greater motivation in the former group due to their treatment. The test also gives patients an opportunity to review unclear topics and enables the educational team to adapt its teaching to the patients.
Abstract:
This study examines syntactic and morphological aspects of the production and comprehension of pronouns by 99 typically developing French-speaking children aged 3 years, 5 months to 6 years, 5 months. A fine structural analysis of subject, object, and reflexive clitics suggests that whereas the object clitic chain crosses the subject chain, the reflexive clitic chain is nested within it. We argue that this structural difference introduces differences in processing complexity, chain crossing being more complex than nesting. In support of this analysis, both production and comprehension experiments show that children have more difficulty with object than with reflexive clitics (with more omissions in production and more erroneous judgments in sentences involving Principle B in comprehension). Concerning the morphological aspect, French subject and object pronouns agree in gender with their referent. We report serious difficulties with pronoun gender both in production and comprehension in children around the age of 4 (with nearly 30% errors in production and chance level judgments in comprehension), which tend to disappear by age 6. The distribution of errors further suggests that the masculine gender is processed as the default value. These findings provide further insights into the relationship between comprehension and production in the acquisition process.
Abstract:
We have designed and built an experimental device, which we call a "thermoelectric bridge." Its primary purpose is the simultaneous measurement of the relative Peltier and Seebeck coefficients. With this device the systematic errors for both coefficients are equal, and no manipulation is necessary between the measurement of one coefficient and the other, which makes it especially suitable for verifying the linear relation between them postulated by Lord Kelvin. Simultaneous measurement of thermal conductivity is also described in the text. The sample is a nickel-platinum couple; measurements were taken over the range -20 to 60 °C, establishing the temperature dependence of each coefficient with nearly equal random errors of ±0.2% and systematic errors estimated at ≤0.5%. The Kelvin relation is verified in this range from these results, the deviations from it being ≤0.3%, within the ±0.5% uncertainty caused by the propagation of errors.
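For reference, the linear relation being verified is the first Kelvin (Thomson) relation, a textbook identity linking the two coefficients; with the figures quoted above, the verification criterion reads:

\[
  \Pi(T) = S(T)\,T,
  \qquad
  \left| \frac{\Pi(T)}{S(T)\,T} - 1 \right| \le 0.3\% < 0.5\%,
\]

where \(\Pi\) is the relative Peltier coefficient, \(S\) the relative Seebeck coefficient, and \(T\) the absolute temperature.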
Abstract:
In arbitrary-dimensional spaces, the Lie algebra of the Poincaré group is seen to be a subalgebra of the complex Galilei algebra, while the Galilei algebra is a subalgebra of the complex Poincaré algebra. The usual contraction of the Poincaré group to the Galilei group is seen to be equivalent to a certain coordinate transformation.
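For illustration, the standard İnönü-Wigner contraction referred to here (a textbook sketch, not necessarily the coordinate-transformation argument of the paper) keeps the rotations \(J_i\), momenta \(P_i\), and energy \(H\) fixed, rescales the boosts \(K_i\) by \(1/c\), and lets \(c \to \infty\):

\[
  [K_i, K_j] = -\frac{i}{c^{2}}\,\epsilon_{ijk} J_k \;\to\; 0,
  \qquad
  [K_i, P_j] = \frac{i}{c^{2}}\,\delta_{ij} H \;\to\; 0,
\]

while \([J_i, J_j] = i\epsilon_{ijk}J_k\), \([J_i, K_j] = i\epsilon_{ijk}K_k\), \([K_i, H] = iP_i\), and the remaining brackets keep their form, which is exactly the Galilei algebra. Writing \(H = Mc^{2} + E\) instead yields the central term \([K_i, P_j] \to i\,\delta_{ij}M\) of the Bargmann extension.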
Abstract:
BACKGROUND: Chemotherapy is prescribed according to protocols of several cycles. These protocols include not only therapeutic agents but also adjuvant solvents and the associated supportive care measures. Multiple errors can occur during prescription, transmission of documents, and drug delivery, and can lead to potentially serious consequences. OBJECTIVE: To assess the effect of a computerised physician order entry (CPOE) system on the number of prescribing errors recorded by the centralised chemotherapy unit of a pharmacy service in a university hospital. PATIENTS AND METHODS: Existing chemotherapy protocols were standardised by a multidisciplinary team (composed of a doctor, a pharmacist and a nurse), and a CPOE system was developed from a FileMaker Pro database. Chemotherapy protocols were progressively introduced into the CPOE system. The effect of the system on prescribing errors was measured over 15 months before and 21 months after the start of computerised protocol prescription. Errors were classified as major (dosage and drug name) or minor (volume or type of infusion solution). RESULTS: Before computerisation, 141 errors were recorded for 940 prescribed chemotherapy regimens (15%). After introduction of the CPOE system, 75 errors were recorded for 1505 prescribed chemotherapy regimens (5%). Of these errors, 69 (92%) occurred in prescriptions that did not use a computerised protocol. A marked decrease in the number of errors became noticeable once 50% of the chemotherapy protocols were prescribed through the CPOE system. CONCLUSION: Errors in chemotherapy prescription nearly disappeared after implementation of CPOE. The safety of chemotherapy prescription was markedly improved.
Abstract:
Investigation of violent death, especially cases of sharp trauma and gunshot, is an important part of medico-legal work. Besides the conventional autopsy, the performance of a post-mortem Multi-Detector Computed Tomography (MDCT) scan has become a highly valued tool. In order to investigate the vascular system as well, post-mortem CT angiography has been introduced. The most studied and widespread technique is Multi-phase post-mortem CT angiography (MPMCTA); its sensitivity in detecting vascular lesions is even superior to that of conventional autopsy. The application of MPMCTA to cases of gunshot and sharp trauma is therefore an obvious choice, as vascular lesions are common in such victims. In most cases of sharp trauma, and in several cases of gunshot, death can be attributed to exsanguination. MPMCTA is able to detect the exact source of bleeding and also to visualise trajectories, which are of greatest importance in these cases. The reconstructed images clearly visualise the trajectory in a way that is easily comprehensible for legal professionals without medical training. The sensitivity of MPMCTA for soft tissue and organ lesions approximately matches that of conventional autopsy. However, special care, experience, and effective use of the imaging software are necessary when performing trajectory reconstructions. Large space-occupying haemorrhages and displacement of internal organs are sources of errors and misinterpretations. This presentation gives an overview of the advantages and limitations of MPMCTA for investigating cases of gunshot and sharp trauma.
Abstract:
We give a geometric description of the interpolating varieties for the algebra of Fourier transforms of distributions (or Beurling ultradistributions) with compact support on the real line.
Abstract:
When individuals learn by trial and error, they perform randomly chosen actions and then reinforce those actions that led to a high payoff. However, individuals do not always have to physically perform an action in order to evaluate its consequences. Rather, they may be able to mentally simulate actions and their consequences without actually performing them. Such fictitious learners can select actions with high payoffs without long chains of trial-and-error learning. Here, we analyze the evolution of an n-dimensional cultural trait (or artifact) by learning, in a payoff landscape with a single optimum. We derive the stochastic learning dynamics of the distance to the optimum in trait space when choice between alternative artifacts follows the standard logit choice rule. We show that for both trial-and-error and fictitious learners, the learning dynamics stabilize at an approximate distance of √(n/(2λ_e)) from the optimum, where λ_e is an effective learning-performance parameter that depends on the learning rule under scrutiny. Individual learners are thus unlikely to reach the optimum when traits are complex (n large), and so face a barrier to further improvement of the artifact. We show, however, that this barrier can be significantly reduced in a large population of learners performing payoff-biased social learning, in which case λ_e becomes proportional to population size. Overall, our results illustrate the effects of errors in learning, levels of cognition, and population size on the evolution of complex cultural traits.
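As a concrete illustration of the logit choice rule and the √(n/(2λ_e)) stabilization distance, the following minimal simulation sketch may help (Python; the quadratic payoff landscape, parameter values, and accept/reject scheme are illustrative assumptions, not the paper's model):

import numpy as np

rng = np.random.default_rng(0)

n = 10        # trait dimensionality (illustrative)
lam = 5.0     # logit choice intensity lambda (illustrative)
sigma = 0.1   # size of random trial modifications
steps = 20_000

def payoff(x):
    # Single-optimum landscape, optimum at the origin.
    return -np.sum(x ** 2)

x = rng.normal(size=n)  # current artifact
for _ in range(steps):
    trial = x + sigma * rng.normal(size=n)  # trial-and-error variant
    # Logit choice between current and trial artifact:
    # P(trial) = exp(lam*pi_trial) / (exp(lam*pi_trial) + exp(lam*pi_current)).
    d = lam * (payoff(trial) - payoff(x))
    p = 0.5 * (1.0 + np.tanh(d / 2.0))  # overflow-safe form of the logistic function
    if rng.random() < p:
        x = trial

print("final distance to optimum:", np.linalg.norm(x))
print("predicted scale sqrt(n/(2*lam)):", np.sqrt(n / (2 * lam)))

Under this scheme the stationary distribution of the artifact is proportional to exp(λ·payoff), so the expected squared distance to the optimum is n/(2λ), matching the scaling quoted above with λ_e = λ.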
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as a multi-valued logic. The study is composed of six distinct publications.

The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects into mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On this basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics is given for it.

In the second paper, Sanchez's characterization of the solvability of the relational equation R∘X = T (where R, X, and T are fuzzy relations, X is the unknown, and ∘ is the minimum-induced composition) is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations.

In the third publication, common features of various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved.

The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. It is shown that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras.

In the fifth paper, a multi-valued sentential logic with truth values in an injective MV-algebra is introduced, and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of truth values is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning.

The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with truth values in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. The proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
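To make the relational-equation result in the second paper concrete, here is a small numeric sketch of the classical sup-min case of Sanchez's characterization (Python; the matrices are invented, and the generalized lattice-valued version developed in the thesis is not shown):

import numpy as np

def alpha(a, b):
    # Goedel residuum: a -> b = 1 if a <= b, else b.
    return np.where(a <= b, 1.0, b)

def supmin(R, X):
    # Sup-min composition: (R o X)[i, k] = max_j min(R[i, j], X[j, k]).
    return np.max(np.minimum(R[:, :, None], X[None, :, :]), axis=1)

# Invented example data.
R = np.array([[0.9, 0.4],
              [0.2, 0.8]])
X_true = np.array([[0.7, 0.3],
                   [0.5, 1.0]])
T = supmin(R, X_true)

# Greatest candidate solution of R o X = T:
# X_hat[j, k] = min_i alpha(R[i, j], T[i, k]).
X_hat = np.min(alpha(R[:, :, None], T[:, None, :]), axis=0)

assert np.allclose(supmin(R, X_hat), T)  # the equation is solvable here
print(X_hat)

The equation R∘X = T is solvable exactly when this X̂ satisfies it, and in that case every solution lies below X̂.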
Factors affecting hospital admission and recovery stay duration of in-patient motor victims in Spain
Abstract:
Hospital expenses are a major cost driver of healthcare systems in Europe, with motor injuries being the leading mechanism of hospitalization. This paper investigates the injury characteristics that explain the hospitalization of victims of traffic accidents in Spain. Using a motor insurance database of 16,081 observations, a generalized Tobit regression model is applied to analyse the factors that influence both the likelihood of being admitted to hospital after a motor collision and the length of hospital stay in the event of admission. The consistency of the Tobit estimates relies on the normality of the disturbance terms; here a semi-parametric regression model was fitted to test the consistency of the estimates, concluding that a normal distribution of errors cannot be rejected. Among other results, it was found that older men with fractures and with injuries located in the head and lower torso are more likely to be hospitalized after a collision, and that they also have a longer expected length of hospital recovery stay.
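For intuition about the censored-regression machinery involved, the following is a minimal sketch of a standard type I Tobit likelihood (Python; the paper itself uses a generalized Tobit with separate admission and duration equations, and the data and variable names below are invented):

import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y):
    # Type I Tobit, left-censored at 0: y* = X b + e, e ~ N(0, s^2),
    # and we observe y = max(y*, 0).
    b, log_s = params[:-1], params[-1]
    s = np.exp(log_s)  # keeps sigma positive during optimization
    xb = X @ b
    censored = y <= 0
    ll = np.where(
        censored,
        stats.norm.logcdf(-xb / s),                   # P(y* <= 0)
        stats.norm.logpdf((y - xb) / s) - np.log(s),  # density of observed y
    )
    return -ll.sum()

# Invented example: simulate censored data and fit by maximum likelihood.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(size=500)
y = np.maximum(y_star, 0.0)

res = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
print("coefficient estimates:", res.x[:-1], "sigma:", np.exp(res.x[-1]))

The generalized (type II) variant used in the paper replaces the single latent variable with separate selection (admission) and outcome (stay duration) equations with correlated errors.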
Abstract:
The aim of this thesis was to analyze the background of an activity-based costing system used in a domestic forest industry company. The reports produced by the system have not been reliable, which has caused use of the system to diminish. The study began by examining the theory of activity-based costing. It was also noted that the system produces management accounting information, so that theory was introduced briefly as well. Next, the possible sources of errors were examined. The significance of these errors was evaluated, and waste handling was chosen as the subject of further study. The problem with waste handling was that there is no waste compensation in the current model. When a paper or board machine produces waste, the waste can be used as raw material in the process; however, the product being produced at the time currently receives no compensation for it. Using such compensation has not been possible because the quantity of process waste was not known. As a result of the study, a calculation model was introduced that derives the quantity of process waste from mill-system data. This, in turn, makes it possible to adopt waste compensation in the future.
Abstract:
In this dissertation, active galactic nuclei (AGN) are discussed as they are seen with the high-resolution radio-astronomical technique called Very Long Baseline Interferometry (VLBI). This observational technique provides very high angular resolution (~10⁻³ arcseconds, i.e. 1 milliarcsecond). VLBI observations performed at different radio frequencies (multi-frequency VLBI) allow one to penetrate deep into the core of an AGN and reveal the otherwise obscured inner part of the jet and the vicinity of the AGN's central engine. Multi-frequency VLBI data are used to scrutinize the structure and evolution of the jet, as well as the distribution of the polarized emission. These data can help to derive the properties of the plasma and the magnetic field, and to constrain the jet composition and the parameters of the emission mechanisms. VLBI data can also be used to test possible physical processes in the jet by comparing observational results with numerical simulations. The work presented in this thesis contributes to several aspects of AGN physics, as well as to the methodology of VLBI data reduction. In particular, Paper I reports evidence that the optical and radio emission of AGN comes from the same region in the inner jet. This result was obtained via simultaneous observations of linear polarization in the optical and, using the VLBI technique, in the radio for a sample of AGN. Papers II and III describe in detail the jet kinematics of the blazar 0716+714, based on multi-frequency data, and reveal a peculiar kinematic pattern: plasma in the inner jet appears to move substantially faster than that in the large-scale jet. This peculiarity is explained by jet bending in Paper III. Paper III also presents a test of a new imaging technique for VLBI data, the Generalized Maximum Entropy Method (GMEM), with observed (not simulated) data, and compares its results with conventional imaging. Papers IV and V report the results of observations of circularly polarized (CP) emission in AGN at small spatial scales. In particular, Paper IV presents core CP values for 41 AGN at 15, 22 and 43 GHz, obtained with the help of the standard gain-transfer (GT) method previously developed by D. Homan and J. Wardle for the calibration of multi-source VLBI observations. That method was designed for long multi-source observations, in which many AGN are observed in a single VLBI run. In contrast, in Paper V an attempt is made to apply the GT method to single-source VLBI observations. In such observations the object list includes only a few sources (a target source and two or three calibrators), and the run lasts much less time than a multi-source experiment. For the CP calibration of a single-source observation, one of the calibrators must be a source with zero or known CP. Where archival observations included such a source among the calibrators, GT could also be applied to archival data, extending the list of AGN with known CP at small spatial scales. Paper V also contains a calculation of the contributions of different sources of error to the uncertainty of the final result, and presents the first results for the blazar 0716+714.
Abstract:
A study of the spatial variability of soil resistance to penetration (RSP) data was conducted for the layers 0.0-0.1 m, 0.1-0.2 m, and 0.2-0.3 m in depth, using univariate statistical methods, i.e., traditional geostatistics, with thematic maps formed by ordinary kriging for each layer studied. The RSP in the 0.2-0.3 m layer was then analyzed with a spatial linear model (SLM) that used the 0.0-0.1 m and 0.1-0.2 m layers as covariates, yielding an estimation model and a thematic map by universal kriging. The thematic maps of the RSP in the 0.2-0.3 m layer constructed by the two methods were compared using accuracy measures obtained from the error matrix and the confusion matrix. The thematic maps are similar, and all of them show that the RSP is higher in the northern region.
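The accuracy measures used in such comparisons are standard; here is a minimal sketch of overall accuracy and Cohen's kappa computed from the error (confusion) matrix of two classified maps (Python; the class count, grid size, and values are invented):

import numpy as np

def error_matrix(map_a, map_b, n_classes):
    # Cross-tabulate the pixel classes of two thematic maps.
    m = np.zeros((n_classes, n_classes), dtype=int)
    for a, b in zip(map_a.ravel(), map_b.ravel()):
        m[a, b] += 1
    return m

def accuracy_measures(m):
    n = m.sum()
    po = np.trace(m) / n                     # overall accuracy
    pe = (m.sum(0) * m.sum(1)).sum() / n**2  # expected chance agreement
    kappa = (po - pe) / (1 - pe)             # Cohen's kappa
    return po, kappa

# Invented example: two 3-class maps on a 20 x 20 grid.
rng = np.random.default_rng(2)
map_ordinary = rng.integers(0, 3, size=(20, 20))
map_universal = np.where(rng.random((20, 20)) < 0.8, map_ordinary,
                         rng.integers(0, 3, size=(20, 20)))

po, kappa = accuracy_measures(error_matrix(map_ordinary, map_universal, 3))
print(f"overall accuracy = {po:.2f}, kappa = {kappa:.2f}")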
Abstract:
The goal of this study was to develop a fuzzy model to predict the occupancy rate of free-stall facilities for dairy cattle, helping to optimize facility design. The following input variables were defined for the development of the fuzzy system: dry bulb temperature (Tdb, °C), wet bulb temperature (Twb, °C) and black globe temperature (Tbg, °C). Based on the input variables, the fuzzy system predicts the occupancy rate (OR, %) of dairy cattle in free-stall barns. For the model validation, data collection was conducted at the facilities of the Intensive System of Milk Production (SIPL) at the Dairy Cattle National Research Center (CNPGL) of Embrapa. The OR values estimated by the fuzzy system showed an average standard deviation of 3.93%, indicating a low error rate in the simulation. Simulated and measured results were statistically equivalent (P>0.05, t-test). After validation of the proposed model, the average percentage of correct predictions for the simulated data was 89.7%. The fuzzy system developed for predicting the occupancy rate of free-stall facilities for dairy cattle therefore provided realistic predictions of stall occupancy, supporting the planning and design of free-stall barns.
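To illustrate the type of inference such a system performs, here is a minimal fuzzy sketch with triangular membership functions and a weighted-average (Sugeno-style) defuzzification (Python; the membership functions, rules, and output values are invented, not the authors' calibrated model):

def tri(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b.
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def occupancy_rate(tdb, twb, tbg):
    # Fuzzify the three inputs with invented 'comfortable' membership functions.
    comfort = min(tri(tdb, 10, 20, 30), tri(twb, 8, 16, 24), tri(tbg, 12, 22, 32))
    hot = 1.0 - comfort
    # Two rules with weighted-average defuzzification:
    #   IF conditions comfortable THEN OR = 90%;  IF hot THEN OR = 40%.
    return (comfort * 90.0 + hot * 40.0) / (comfort + hot)

print(occupancy_rate(tdb=22.0, twb=17.0, tbg=24.0))  # -> 80.0 for these inputs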