876 results for Probabilistic Finite Automata
Abstract:
In the past few decades, the rise of criminal, civil and asylum cases involving young people lacking valid identification documents has generated an increase in the demand for age estimation. The chronological age, or the probability that an individual is older or younger than a given age threshold, is generally estimated by means of statistical methods based on observations of specific physical attributes. Among these statistical methods, those developed in the Bayesian framework allow users to provide coherent and transparent assignments which fulfil forensic and medico-legal purposes. The application of the Bayesian approach is facilitated by probabilistic graphical tools, such as Bayesian networks. The aim of this work is to test the performance of a Bayesian network for age estimation, recently presented in the scientific literature, in classifying individuals as older or younger than 18 years of age. For these exploratory analyses, a sample related to the ossification status of the medial clavicular epiphysis, available in the scientific literature, was used. Results obtained in the classification are promising: in the criminal context, the Bayesian network achieved, on average, a rate of correct classifications of approximately 97%, whilst in the civil context the rate is, on average, close to 88%. These results encourage the continued development and testing of the method in order to support its practical application in casework.
Abstract:
Partial-thickness tears of the supraspinatus tendon frequently occur at its insertion on the greater tubercle of the humerus, causing pain and reduced strength and range of motion. The goal of this work was to quantify the loss of loading capacity due to tendon tears at the insertion area. A finite element model of the supraspinatus tendon was developed using in vivo magnetic resonance image data. The tendon was represented by an anisotropic hyperelastic constitutive law identified from experimental measurements. A failure criterion was proposed and calibrated with experimental data. A partial-thickness tear was gradually increased, starting from the deep articular-sided fibres. For different values of tear thickness, the tendon was mechanically loaded up to failure. The numerical model predicted a loss in loading capacity of the tendon as the tear thickness progressed. Tendon failure was more likely when the tear exceeded 20% of the tendon thickness. The predictions of the model were consistent with experimental studies. Partial-thickness tears below 40% are sufficiently stable to withstand physiotherapeutic exercises; above a 60% tear, surgery should be considered to restore shoulder strength.
Abstract:
Due to the rise of criminal, civil and administrative judicial situations involving people lacking valid identity documents, age estimation of living persons has become an important operational procedure for numerous forensic and medico-legal services worldwide. The chronological age of a given person is generally estimated from the observed degree of maturity of some selected physical attributes by means of statistical methods. However, their application in the forensic framework suffers from some conceptual and practical drawbacks, as recently claimed in the specialised literature. The aim of this paper is therefore to offer an alternative solution for overcoming these limits, by reiterating the utility of a probabilistic Bayesian approach for age estimation. This approach allows one to deal in a transparent way with the uncertainty surrounding the age estimation process and to produce all the relevant information in the form of a posterior probability distribution over the chronological age of the person under investigation. Furthermore, this probability distribution can also be used for evaluating, in a coherent way, the possibility that the examined individual is younger or older than a given legal age threshold of particular legal interest. The main novelty introduced by this work is the development of a probabilistic graphical model, i.e. a Bayesian network, for dealing with the problem at hand. The use of this kind of probabilistic tool can significantly facilitate the application of the proposed methodology: examples are presented based on data related to the ossification status of the medial clavicular epiphysis. The reliability and the advantages of this probabilistic tool are presented and discussed.
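The posterior-probability reasoning described in this abstract can be illustrated with a toy discrete Bayes computation. The age prior and the stage likelihoods below are invented placeholders, not the data or network from the paper; this is only a sketch of how a posterior over age yields the probability of exceeding a legal threshold.

```python
# Toy Bayesian classification: P(age >= 18 | observed ossification stage).
# Prior and likelihoods are INVENTED placeholders, not the paper's data.

ages = list(range(14, 24))                     # candidate chronological ages
prior = {a: 1.0 / len(ages) for a in ages}     # uniform prior over ages

# P(stage "complete fusion" | age): assumed to rise with age
likelihood = {a: min(1.0, max(0.0, (a - 15) / 6.0)) for a in ages}

# Bayes' rule: posterior(age) is proportional to prior(age) * likelihood(age)
unnorm = {a: prior[a] * likelihood[a] for a in ages}
z = sum(unnorm.values())
posterior = {a: p / z for a, p in unnorm.items()}

# Probability the individual is 18 or older, given the observation
p_adult = sum(p for a, p in posterior.items() if a >= 18)
print(f"P(age >= 18 | stage) = {p_adult:.3f}")
```

The same computation is what a Bayesian network performs automatically once the conditional probability tables linking age and maturity indicators are specified.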
Abstract:
BACKGROUND: Although the importance of accurate femoral reconstruction to achieve a good functional outcome is well documented, quantitative data on the effects of a displacement of the femoral center of rotation on moment arms are scarce. The purpose of this study was to calculate moment arms after nonanatomical femoral reconstruction. METHODS: Finite element models of 15 patients, including the pelvis, the femur, and the gluteal muscles, were developed. Moment arms were calculated within the native anatomy and compared to distinct displacements of the femoral center of rotation (leg lengthening of 10 mm, loss of femoral offset of 20%, anteversion ±10°, and fixed anteversion at 15°). Calculations were performed within the range of motion observed during a normal gait cycle. RESULTS: Although the abductor moment arm remained positive for all evaluated displacements of the femoral center of rotation, some fibers initially contributing to extension became antagonists (flexors) and vice versa. A loss of 20% of femoral offset led to an average decrease of 15% of the abductor moment. Femoral lengthening and changes in femoral anteversion (±10°, fixed at 15°) led to minimal changes in abductor moment arms (maximum change of 5%). Native femoral anteversion correlated with the changes in moment arms induced by the 5 variations of reconstruction. CONCLUSION: Accurate reconstruction of offset is important for maintaining abductor moment arms, while changes of femoral rotation had minimal effects. Patients with larger native femoral anteversion appear to be more susceptible to femoral head displacements.
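A muscle moment arm of the kind computed in this study can be illustrated geometrically: it is the perpendicular distance from the joint center of rotation to the muscle's line of action. The coordinates below are invented placeholders, not patient-specific data from the finite element models.

```python
import math

# Moment arm as the perpendicular distance from the center of rotation
# to the line of action of a muscle fiber (3-D cross product).
# All coordinates are INVENTED placeholders, not patient data.

def moment_arm(center, origin, insertion):
    """Distance from `center` to the line through origin -> insertion."""
    dx = [i - o for i, o in zip(insertion, origin)]    # line direction
    r  = [c - o for c, o in zip(center, origin)]       # origin -> center
    cross = [r[1] * dx[2] - r[2] * dx[1],              # r x dx
             r[2] * dx[0] - r[0] * dx[2],
             r[0] * dx[1] - r[1] * dx[0]]
    return math.dist(cross, (0, 0, 0)) / math.dist(dx, (0, 0, 0))

# Hip center at the origin; a fiber running parallel to the y-axis,
# offset 5 cm laterally (metres, invented)
arm = moment_arm(center=(0.0, 0.0, 0.0),
                 origin=(0.05, 0.10, 0.0),
                 insertion=(0.05, -0.10, 0.0))
print(f"abductor moment arm = {arm:.3f} m")
```

Displacing the center of rotation (e.g. by a loss of offset) changes `center` relative to each fiber's line of action, which is how the reconstruction variants in the study alter the computed moment arms.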
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a mechanical scanning imaging sonar (MSIS) and the robot dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method utilizes two extended Kalman filters (EKF). The first estimates the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and maintains the poses of the registered scans. The raw data from the sensors are processed and fused online. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
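The predict/update cycle at the heart of such a filter can be sketched in its simplest form: a one-dimensional linear Kalman filter, the linear special case of the EKFs used in the paper. The motion, measurement, and noise values below are invented for illustration.

```python
# Minimal 1-D Kalman predict/update cycle (linear special case of the EKF).
# All numeric values are INVENTED placeholders, not the paper's filter.

def predict(x, p, u, q):
    """Propagate state x by control u; p is the variance, q process noise."""
    return x + u, p + q

def update(x, p, z, r):
    """Fuse a measurement z (variance r) into the state estimate."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0                          # initial position estimate, variance
x, p = predict(x, p, u=1.0, q=0.5)       # dead-reckoned displacement of 1 m
x, p = update(x, p, z=1.2, r=0.5)        # position fix from scan matching
print(x, p)                              # fused estimate, reduced variance
```

In the paper's setting the state is a vector of scan poses and the models are nonlinear, so the gain involves Jacobians, but the structure of the two steps is the same.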
Abstract:
Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the "energy" (or "mass", or so on) of that pattern. The overall "energy" of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often, it happens that the energy values are non-negative integers, interpreted as the number of "particles" distributed on a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation law by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA.
We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian, and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of the Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
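The notion of a conserved additive quantity can be checked numerically on a concrete CA. The elementary "traffic" rule 184 is a standard example, independent of this thesis's specific constructions: it conserves the number of 1-cells, each of which behaves as a particle that moves right exactly when the cell ahead is empty.

```python
# Rule 184 ("traffic" CA) on a periodic line conserves the number of
# 1-cells.  Rule number 184 = 10111000 in binary, so the new state is 1
# exactly for the neighborhoods 111, 101, 100 and 011.

def step_rule184(cells):
    n = len(cells)
    out = []
    for i in range(n):
        triple = (cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
        out.append(1 if triple in {(1, 1, 1), (1, 0, 1),
                                   (1, 0, 0), (0, 1, 1)} else 0)
    return out

config = [0, 1, 1, 0, 1, 0, 0, 1]
energy = sum(config)                 # conserved quantity: particle count
for _ in range(10):
    config = step_rule184(config)
    assert sum(config) == energy     # conservation law holds at every step
print(config, sum(config))
```

Here the "energy" of a local pattern is just the state of a single cell, and the microscopic explanation is the particle-movement rule described above.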
Abstract:
The properties of spin-polarized pure neutron matter and symmetric nuclear matter are studied using the finite-range simple effective interaction with a revised parametrization. Of the twelve parameters involved, ten are now determined from nuclear matter, compared with nine in our earlier calculation; this is required in order to obtain unique predictions for both spin-polarized nuclear matter and finite nuclei, free from the uncertainty found with the earlier parametrization. Information on the effective mass splitting in polarized neutron matter from microscopic calculations is used to constrain the additional parameter, which was earlier determined from finite nuclei, and in doing so the quality of the description of finite nuclei is not compromised. The interaction with the new set of parameters is used to study the possibilities of ferromagnetic and antiferromagnetic transitions in completely polarized symmetric nuclear matter. Emphasis is given to analyzing the results analytically, as far as possible, to elucidate the role of the interaction parameters in the predictions.
Abstract:
One of the characteristics of finite risk reinsurance is the existence of an experience account, formed by the premiums collected by the reinsurer together with their financial return, whose purpose is to fund the claims that the reinsurer must pay to the cedent within the established period. The objective of this work is to design a model for determining the estimated balance, or reserve, that the experience account must hold in each annual period in order to guarantee its dynamic solvency, taking into account the claims experience of the reinsurer's portfolio and of each cedent. For the calculation of the reinsurance premium and of the experience account balance, a stochastic financial environment is assumed, so that the reinsurance premium also depends on other parameters such as interest rate volatility and risk aversion.
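The annual dynamics of such an experience account reduce to a simple recursion: the balance earns a financial return, receives the premium, and pays the year's claims. The figures below are invented placeholders, and the deterministic rates stand in for the stochastic interest-rate model of the paper, which is not reproduced here.

```python
# Experience-account balance recursion (sketch):
#   balance_{t+1} = balance_t * (1 + r_t) + premium - claims_t
# Rates, premium and claims are INVENTED; the paper models r_t
# stochastically, which is not reproduced here.

premium = 100.0
rates   = [0.03, 0.025, 0.04]      # annual financial returns
claims  = [60.0, 130.0, 80.0]      # claims paid to the cedent each year

balance = 0.0
history = []
for r, c in zip(rates, claims):
    balance = balance * (1 + r) + premium - c
    history.append(round(balance, 2))

print(history)                     # year-end balances of the account
```

A solvency condition of the kind the paper studies would require the balance (or a reserve derived from its distribution) to remain non-negative in every annual period.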
Abstract:
A generalized off-shell unitarity relation for the two-body scattering T matrix in a many-body medium at finite temperature is derived through a consistent real-time perturbation expansion by means of Feynman diagrams. We comment on perturbation schemes at finite temperature in connection with an erroneous formulation of the Dyson equation in a recently published paper.
Abstract:
The focus of this dissertation is to develop finite elements based on the absolute nodal coordinate formulation. The absolute nodal coordinate formulation is a nonlinear finite element formulation introduced for the special requirements of flexible multibody dynamics. In this formulation, a special definition of element rotation is employed to ensure that the formulation does not suffer from singularities due to large rotations. The absolute nodal coordinate formulation can be used for analyzing the dynamics of beam, plate and shell type structures. The improvements of the formulation mainly concern the description of transverse shear deformation. Additionally, the formulation is verified against a conventional isoparametric solid finite element and geometrically exact beam theory. Previous claims about especially high eigenfrequencies are studied by introducing beam elements based on the absolute nodal coordinate formulation in the framework of the large rotation vector approach. Additionally, the same high-eigenfrequency problem is studied by using constraints for transverse deformation. It was determined that the improvements for shear deformation in the transverse direction lead to clear improvements in computational efficiency. This was especially true when a comparative stress must be defined, for example when using an elasto-plastic material. Furthermore, the developed plate element can be used to avoid certain numerical problems, such as shear and curvature locking. In addition, it was shown that, when compared to conventional solid elements or elements based on nonlinear beam theory, elements based on the absolute nodal coordinate formulation do not lead to an especially stiff system of equations of motion.
Abstract:
Cellular automata are models for massively parallel computation. A cellular automaton consists of cells which are arranged in some kind of regular lattice and a local update rule which updates the state of each cell according to the states of the cell's neighbors on each step of the computation. This work focuses on reversible one-dimensional cellular automata, in which the cells are arranged in a two-way infinite line and the computation is reversible, that is, the previous states of the cells can be derived from the current ones. In this work it is shown that several properties of reversible one-dimensional cellular automata are algorithmically undecidable, that is, there exists no algorithm that would tell whether a given cellular automaton has the property or not. It is shown that the tiling problem of Wang tiles remains undecidable even in some very restricted special cases. It follows that it is undecidable whether some given states will always appear in computations by the given cellular automaton. It also follows that a weaker form of expansivity, which is a concept of dynamical systems, is an undecidable property for reversible one-dimensional cellular automata. It is shown that several properties of dynamical systems are undecidable for reversible one-dimensional cellular automata. It is shown that sensitivity to initial conditions and topological mixing are undecidable properties. Furthermore, non-sensitive and mixing cellular automata are recursively inseparable. It follows that chaotic behavior is also an undecidable property for reversible one-dimensional cellular automata.
Abstract:
This study aimed to describe the probabilistic structure of the annual series of extreme daily rainfall (Preabs), available from the weather station of Ubatuba, State of São Paulo, Brazil (1935-2009), by using the generalized extreme value (GEV) distribution. The autocorrelation function, the Mann-Kendall test, and wavelet analysis were used in order to evaluate the presence of serial correlation, trends, and periodical components. Considering the results obtained with these three statistical methods, it was possible to assume that this temporal series is free from persistence, trends, and periodical components. Based on quantitative and qualitative goodness-of-fit tests, it was found that the GEV may be used to quantify the probabilities of the Preabs data. The best results were obtained when the parameters of the GEV were estimated by the method of maximum likelihood. The method of L-moments also showed satisfactory results.
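Once the GEV parameters have been estimated (by maximum likelihood or L-moments), exceedance probabilities follow directly from the GEV distribution function. The location, scale, and shape values below are invented placeholders, not the Ubatuba estimates.

```python
import math

# GEV cumulative distribution function F(x; mu, sigma, xi):
#   F(x) = exp(-(1 + xi*(x - mu)/sigma) ** (-1/xi))   for xi != 0
#   F(x) = exp(-exp(-(x - mu)/sigma))                 for xi == 0 (Gumbel)
def gev_cdf(x, mu, sigma, xi):
    s = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-s))
    t = 1.0 + xi * s
    if t <= 0.0:                     # outside the distribution's support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

# INVENTED parameters (mm of daily rainfall), not the Ubatuba estimates
mu, sigma, xi = 90.0, 25.0, 0.1

# Probability that the annual maximum daily rainfall exceeds 150 mm
p_exceed = 1.0 - gev_cdf(150.0, mu, sigma, xi)
print(f"P(annual max > 150 mm) = {p_exceed:.3f}")
```

The reciprocal of this exceedance probability is the return period of the 150 mm event, which is the quantity usually reported in hydrological applications of the GEV.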