Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects to mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On this basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially the fact that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation R ∘ X = T, where R, X and T are fuzzy relations, X is the unknown, and ∘ is the composition induced by the minimum, is extended to compositions induced by more general products on a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic; he also demonstrated that the algebra of his logic is a generalized residuated lattice. It is shown here that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with values of truth in an injective MV-algebra is introduced, and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of the values of truth is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with values of truth in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. The proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
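For the second paper's setting, a minimal sketch of the classical sup-min case on the unit interval, which that paper generalizes to compositions induced by more general products on a value lattice; the composition convention and notation below are illustrative, not taken from the thesis.

```latex
% Sup-min composition of a fuzzy set X on U with a fuzzy relation R on U x V,
% and the Goedel residuum used to build the candidate solution:
(R \circ X)(v) = \sup_{u \in U} \min\bigl(X(u),\, R(u,v)\bigr),
\qquad
a \rightarrow b = \begin{cases} 1, & a \le b,\\ b, & a > b. \end{cases}
% Classical solvability criterion (Sanchez): with
\widehat{X}(u) = \inf_{v \in V} \bigl(R(u,v) \rightarrow T(v)\bigr),
% the equation R o X = T is solvable if and only if R o \widehat{X} = T,
% in which case \widehat{X} is its greatest solution.
```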
Abstract:
In this paper we show how a nonlinear preprocessing of highly noisy speech signals, based on morphological filters, improves the performance of the robust algorithm for pitch tracking (RAPT). This result is obtained with a very simple morphological filter; more sophisticated ones could improve the results further. Mathematical morphology is widely used in image processing and has a great number of applications. Almost all of its formulations, derived in the two-dimensional framework, are easily adapted to the one-dimensional context.
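As an illustration of the kind of preprocessing described above, here is a minimal sketch of a 1-D morphological opening-closing filter applied to a noisy frame before pitch tracking; the window length, the synthetic signal and the choice of scipy routines are our assumptions, not the paper's actual filter.

```python
# Minimal sketch (not the paper's exact filter): 1-D grey-level opening
# followed by closing, used as nonlinear preprocessing of a noisy speech
# frame before it is handed to a pitch tracker such as RAPT.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_smooth(signal: np.ndarray, size: int = 9) -> np.ndarray:
    """Opening then closing with a flat structuring element of `size` samples."""
    opened = grey_opening(signal, size=size)   # suppresses narrow positive peaks (impulsive noise)
    return grey_closing(opened, size=size)     # suppresses narrow negative valleys

# Synthetic usage example: a 150 Hz "voiced" tone buried in additive noise
fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
noisy = np.sin(2 * np.pi * 150 * t) + 0.5 * np.random.randn(t.size)
preprocessed = morphological_smooth(noisy, size=9)
# `preprocessed` would then be passed to a RAPT-style pitch tracker.
```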
Abstract:
Ample evidence indicates that inhibitory control (IC), a key executive component referring to the ability to suppress cognitive or motor processes, relies on a right-lateralized fronto-basal brain network. However, whether and how IC can be improved with training, and the underlying neuroplastic mechanisms, remain largely unresolved. We used functional and structural magnetic resonance imaging to measure the effects of 2 weeks of training with a Go/NoGo task specifically designed to improve frontal top-down IC mechanisms. The training-induced behavioral improvements were accompanied by a decrease in neural activity to inhibition trials within the right pars opercularis and triangularis, and in the left pars orbitalis of the inferior frontal gyri. Analyses of changes in brain anatomy induced by the IC training revealed increases in grey matter volume in the right pars orbitalis and modulations of white matter microstructure in the right pars triangularis. The task-specificity of the effects of training was confirmed by an absence of change in neural activity in a control working memory task. Our combined anatomical and functional findings indicate that differential patterns of functional and structural plasticity between and within the inferior frontal gyri enhanced the speed of top-down inhibition processes and, in turn, IC proficiency. The results suggest that training-based interventions might help overcome the anatomical and functional deficits of the inferior frontal gyri manifesting in inhibition-related clinical conditions. More generally, we demonstrate how multimodal neuroimaging investigations of training-induced neuroplasticity can reveal novel anatomo-functional dissociations within frontal executive brain networks. Hum Brain Mapp 36:2527-2543, 2015. © 2015 Wiley Periodicals, Inc.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown steadily since its appearance in the early 1970s. Nowadays the use of this technique has become indispensable, thanks among other things to its ability to produce high-quality diagnostic images. However, and despite an indisputable benefit for patient care, the large increase in the number of CT examinations performed raises questions about the potentially harmful effect of ionising radiation on the population. Among these adverse effects, the induction of cancers linked to exposure to ionising radiation remains one of the major risks. For the benefit-risk ratio to remain favourable to the patient, it is therefore necessary to ensure that the delivered dose allows the correct diagnosis to be made while avoiding the use of images whose quality is unnecessarily high. This optimisation process, which is already an important concern for adult patients, must become a priority when children or adolescents are examined, in particular in follow-up studies requiring several examinations throughout their lives. Children and young adults are indeed much more sensitive to radiation because their metabolism is faster than that of adults. Moreover, the probabilities of the events to which they are exposed are also greater because of their longer life expectancy. The introduction of iterative reconstruction algorithms, designed to reduce patient exposure, is certainly one of the greatest advances in CT, but it comes with certain difficulties regarding the assessment of the quality of the images produced. The aim of this work is to set up a strategy for investigating the potential of iterative algorithms with respect to dose reduction without compromising diagnostic quality. The difficulty of this task lies mainly in having a method for assessing image quality in a clinically relevant way. The first step consisted in characterising image quality for musculoskeletal examinations. This work was carried out in close collaboration with radiologists to ensure a relevant choice of image-quality criteria. Particular attention was paid to the noise and resolution of images reconstructed with iterative algorithms. The analysis of these parameters enabled radiologists to adapt their protocols thanks to a possible estimate of the loss of image quality associated with dose reduction. Our work also allowed us to investigate the decrease in low-contrast detectability associated with dose reduction, a major difficulty when examinations are performed in the abdominal region. Knowing that alternatives to the standard way of characterising image quality (Fourier-space metrics) had to be used, we relied on the use of mathematical model observers. Our experimental parameters then determined the type of model to use.
Ideal model observers were used to characterise image quality when purely physical parameters concerning signal detectability had to be estimated, whereas anthropomorphic model observers were used in clinical contexts where the results had to be compared with those of human observers, taking advantage of the properties of this type of model. This study confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, thereby establishing a link between medical physicists and radiologists. We also showed that iterative reconstructions have the potential to reduce dose without impairing diagnostic quality. Among the various iterative reconstructions, the model-based ones offer the greatest optimisation potential, since the images produced with this modality lead to an accurate diagnosis even for acquisitions at very low dose. This work has also clarified the role of the medical physicist in CT: standard metrics remain useful for assessing the compliance of a scanner with legal requirements, but the use of model observers is unavoidable when optimising imaging protocols. -- Computed tomography (CT) is an imaging technique in which interest has been quickly growing since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular with follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was continuously performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality when dealing with musculo-skeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Knowing that alternative ways had to be used to assess image quality rather than classical Fourier-space metrics, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics used in the field remain important for assessing unit compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
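To make the notion of a model observer concrete, here is a minimal sketch of one simple task-based metric, a non-prewhitening matched-filter observer computing a detectability index from signal-present and signal-absent regions of interest; the thesis' actual ideal and anthropomorphic observers are more elaborate, and the synthetic data below are assumptions.

```python
# Minimal sketch of a non-prewhitening (NPW) model observer, one simple
# task-based image-quality metric; not the specific observers used above.
import numpy as np

def npw_detectability(signal_present: np.ndarray, signal_absent: np.ndarray) -> float:
    """Detectability index d' of an NPW observer from ROI stacks of shape (n, h, w)."""
    template = signal_present.mean(axis=0) - signal_absent.mean(axis=0)   # expected signal
    t_sp = np.tensordot(signal_present, template, axes=([1, 2], [0, 1]))  # decision variables
    t_sa = np.tensordot(signal_absent, template, axes=([1, 2], [0, 1]))
    return (t_sp.mean() - t_sa.mean()) / np.sqrt(0.5 * (t_sp.var() + t_sa.var()))

# Synthetic example: a low-contrast disk in white noise
rng = np.random.default_rng(0)
h = w = 32
yy, xx = np.mgrid[:h, :w]
disk = (((xx - 16) ** 2 + (yy - 16) ** 2) < 25).astype(float) * 0.5
sa = rng.normal(0.0, 1.0, (200, h, w))            # signal-absent ROIs
sp = rng.normal(0.0, 1.0, (200, h, w)) + disk     # signal-present ROIs
print(f"NPW d' = {npw_detectability(sp, sa):.2f}")
```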
Abstract:
The present thesis is focused on minimizing the experimental effort needed for the prediction of pollutant propagation in rivers by mathematical modelling and knowledge re-use. Mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge re-use approach employs the methods of case-based reasoning, graphical analysis and text mining. The thesis contributes to the pollutant transport research field with: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques which enable the use of variable parameters along rivers in analytical models; (3) models for the estimation of characteristic pollutant transport parameters (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the re-use of information during pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers for both ordinary pollution and accidents. They can also be applied to other, similar rivers for modelling pollutant transport when experimental concentration data are scarce, because the parameter estimation models developed in the present thesis enable the calculation of characteristic transport parameters as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified using the software for information re-use. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical operation of processes.
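For reference, a minimal sketch of the constant-parameter textbook form of the governing equation and a standard instantaneous point-source solution; the thesis' analytical models, with parameters varying along the river, generalize this case, and the symbols below are the usual textbook ones rather than the thesis' notation.

```latex
% 1-D advection-dispersion(-reaction) equation with constant velocity u,
% dispersion coefficient D and first-order loss rate k:
\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x}
    = D\,\frac{\partial^{2} C}{\partial x^{2}} - k\,C
% Instantaneous release of mass M at x = 0, t = 0 in a river of cross-section A:
C(x,t) = \frac{M}{A\sqrt{4\pi D t}}
         \exp\!\left(-\frac{(x - u t)^{2}}{4 D t}\right)\exp(-k t)
```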
Abstract:
In this work, a new mathematical equation correction approach for overcoming spectral and transport interferences was proposed. The proposal was applied to eliminate the spectral interference caused by PO molecules at the 217.0005 nm Pb line, and the transport interference caused by variations in phosphoric acid concentrations. Correction may be necessary at 217.0005 nm to account for the contribution of PO, since A_total(217.0005 nm) = A_Pb(217.0005 nm) + A_PO(217.0005 nm). This may easily be done by measuring other PO wavelengths (e.g. 217.0458 nm) and calculating the relative contribution of the PO absorbance (A_PO) to the total absorbance (A_total) at 217.0005 nm: A_Pb(217.0005 nm) = A_total(217.0005 nm) - A_PO(217.0005 nm) = A_total(217.0005 nm) - k · A_PO(217.0458 nm). The correction factor k is calculated from the slopes of calibration curves built for phosphorus (P) standard solutions measured at 217.0005 and 217.0458 nm, i.e. k = slope(217.0005 nm) / slope(217.0458 nm). For a wavelength-integrated absorbance of 3 pixels and a sample aspiration rate of 5.0 mL min⁻¹, analytical curves in the 0.1-1.0 mg L⁻¹ Pb range with linearity better than 0.9990 were consistently obtained. Calibration curves for P at 217.0005 and 217.0458 nm with linearity better than 0.998 were obtained. Relative standard deviations (RSD) of measurements (n = 12) in the ranges of 1.4-4.3% and 2.0-6.0% were obtained without and with the mathematical equation correction approach, respectively. The limit of detection calculated for the analytical line at 217.0005 nm was 10 µg L⁻¹ Pb. Recoveries for Pb spikes were in the 97.5-100% and 105-230% intervals with and without the mathematical equation correction approach, respectively.
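A minimal sketch of how this correction could be computed in practice; the function names and the calibration numbers are placeholders introduced for illustration, not data from the paper.

```python
# Sketch of the correction described above: estimate k from P calibration
# slopes at the two PO wavelengths, then subtract the PO contribution at
# the Pb line. All numbers below are placeholders, not measured data.
import numpy as np

def correction_factor(p_conc, a_po_217_0005, a_po_217_0458) -> float:
    """k = slope(217.0005 nm) / slope(217.0458 nm) from P calibration curves."""
    slope_0005 = np.polyfit(p_conc, a_po_217_0005, 1)[0]
    slope_0458 = np.polyfit(p_conc, a_po_217_0458, 1)[0]
    return slope_0005 / slope_0458

def corrected_pb_absorbance(a_total_217_0005: float, a_po_217_0458: float, k: float) -> float:
    """A_Pb(217.0005 nm) = A_total(217.0005 nm) - k * A_PO(217.0458 nm)."""
    return a_total_217_0005 - k * a_po_217_0458

# Hypothetical usage with placeholder calibration data
p_std = np.array([50.0, 100.0, 200.0, 400.0])     # P standards, mg/L (placeholder)
a1 = np.array([0.010, 0.021, 0.040, 0.081])       # A_PO at 217.0005 nm (placeholder)
a2 = np.array([0.025, 0.049, 0.101, 0.199])       # A_PO at 217.0458 nm (placeholder)
k = correction_factor(p_std, a1, a2)
print(corrected_pb_absorbance(0.150, 0.060, k))
```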
Abstract:
Preference relations, and their modelling, have played a crucial role in both the social sciences and applied mathematics. A special category is represented by cardinal preference relations, which are relations that also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision making methods and in operations research. This thesis aims at presenting some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector from a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is arguably the core of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
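The abstract does not spell out the new prioritization algorithm, so as background here is a sketch of one classical baseline for deriving a priority vector from a multiplicative pairwise comparison matrix (the row geometric mean method), together with Saaty's consistency index as one common, and much debated, consistency measure; the 3x3 matrix is a made-up example, and none of this is the thesis' new method.

```python
# Classical baselines, not the thesis' contributions: row geometric mean
# prioritization and Saaty's consistency index for a positive reciprocal
# pairwise comparison matrix A (a_ij * a_ji = 1).
import numpy as np

def geometric_mean_priorities(A: np.ndarray) -> np.ndarray:
    """Normalized row geometric means of a positive reciprocal matrix."""
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return w / w.sum()

def saaty_consistency_index(A: np.ndarray) -> float:
    """CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(A)))
    return (lam_max - n) / (n - 1)

# Hypothetical 3x3 comparison matrix
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
print(geometric_mean_priorities(A), saaty_consistency_index(A))
```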
Abstract:
The present study aimed to determine the volumetric shrinkage of bean (Phaseolus vulgaris L.) seeds during air-drying under different conditions of air temperature and relative humidity, to fit several mathematical models to the observed empirical values, and to select the one that best represents the phenomenon. Six mathematical models were fitted to the experimental values. The goodness of fit of each model was assessed from the coefficient of determination, the behavior of the distribution of the residuals, and the magnitude of the average relative and estimated errors. The volumetric shrinkage that occurred in bean seeds during drying is between 25 and 37% and depends essentially on the final moisture content, regardless of the air conditions during drying. The modified Bala & Woods model best represented the process.
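As an illustration of the model-fitting workflow described above, here is a sketch of fitting a shrinkage curve by least squares and computing the coefficient of determination; the exponential functional form and the drying data are placeholders chosen for illustration, not the modified Bala & Woods model evaluated in the study.

```python
# Illustrative shrinkage-model fit: volume ratio V/V0 versus moisture content.
# The model form and the data are placeholders, not the study's model or data.
import numpy as np
from scipy.optimize import curve_fit

def shrinkage_model(m, a, b, c):
    """Volume ratio V/V0 as a function of moisture content m (decimal, dry basis)."""
    return a - b * np.exp(-c * m)

# Placeholder drying data: moisture content (decimal) and measured V/V0
m_obs = np.array([0.60, 0.45, 0.35, 0.25, 0.18, 0.12])
v_obs = np.array([1.00, 0.93, 0.87, 0.80, 0.74, 0.70])

params, _ = curve_fit(shrinkage_model, m_obs, v_obs, p0=(1.2, 0.6, 2.0))
residuals = v_obs - shrinkage_model(m_obs, *params)
r2 = 1 - np.sum(residuals**2) / np.sum((v_obs - v_obs.mean())**2)  # coefficient of determination
print(params, r2)
```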
Abstract:
In São Paulo State, mainly in rural areas, wooden poles are used for different purposes. In this context, wood in contact with the ground deteriorates faster, which is generally associated with environmental factors and, especially, with the presence of fungi and insects. With the use of mathematical models, the useful life of wooden structures can be predicted by obtaining "climatic indexes" which indicate, comparatively among the areas studied, which ones have a greater or lesser tendency toward fungal and insect attack. In this work, using climatological data from several cities in São Paulo State, a simplified mathematical model was obtained to measure the aggressiveness of the environment towards wood in contact with the soil.
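The abstract does not give the form of the simplified model, so the sketch below is purely hypothetical: it only illustrates how a per-city climatic index could be assembled from monthly temperature and rain-day data and compared between sites; the weights, thresholds and data are invented and are not the model derived in the paper.

```python
# Purely hypothetical illustration of a "climatic index" of decay hazard,
# built from monthly mean temperature and number of rainy days per city.
# The functional form, weights and data are invented, NOT the paper's model.
from typing import Sequence

def climatic_index(monthly_temp_c: Sequence[float], monthly_rain_days: Sequence[int]) -> float:
    """Sum monthly products of temperature and wet-day factors (hypothetical)."""
    index = 0.0
    for t, d in zip(monthly_temp_c, monthly_rain_days):
        temp_factor = max(t - 5.0, 0.0)   # assume negligible decay below ~5 °C
        wet_factor = max(d - 3, 0)        # assume a few dry-ish days are discounted
        index += temp_factor * wet_factor
    return index / 100.0                   # arbitrary scaling for readability

# Comparing two hypothetical cities: a higher index means a more aggressive site
city_a = climatic_index([22, 23, 22, 20, 18, 16, 16, 17, 18, 20, 21, 22],
                        [12, 11, 10, 6, 5, 4, 3, 3, 6, 9, 10, 12])
city_b = climatic_index([25, 25, 24, 23, 21, 20, 20, 21, 22, 23, 24, 25],
                        [18, 16, 14, 9, 6, 4, 3, 4, 7, 11, 14, 17])
print(city_a, city_b)
```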
Abstract:
Rural electrification is characterized by the geographical dispersion of the population, low consumption, high investment by consumers and high cost. Moreover, solar radiation constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the rural photovoltaic system I-100 from ISOFOTON, with a power of 300 Wp, located at the Lageado Experimental Farm of FCA/UNESP. For the development of these equations, the equivalent circuit of photovoltaic cells was studied so that iterative numerical methods could be applied to determine the electrical parameters and to identify possible errors when adapting the equations found in the literature to real conditions. A simulation of a photovoltaic panel was then proposed through mathematical equations adjusted according to the local radiation data. The results yielded equations that provide realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit ensures the supply of the energy generated. This realistic sizing helps establish the possible applications of solar energy for the rural producer and indicates the real possibilities of generating electricity from the sun.
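As an illustration of the kind of iterative numerical procedure mentioned above, here is a sketch of the standard single-diode photovoltaic model solved for current with Brent's root-finding method; the model form is the textbook one, and all parameter values are assumed placeholders rather than data from the ISOFOTON I-100 system.

```python
# Minimal sketch (not the thesis' exact equations): the single-diode model
# I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh,
# solved for I at a given V with an iterative root finder (Brent's method).
import math
from scipy.optimize import brentq

K_B, Q = 1.380649e-23, 1.602176634e-19

def diode_current(v, i_ph, i_0, r_s, r_sh, n, n_cells, t_cell_k=298.15):
    """Module current (A) of the single-diode model at terminal voltage v (V)."""
    v_t = K_B * t_cell_k / Q                       # thermal voltage, ~25.7 mV at 25 °C
    def residual(i):
        v_d = v + i * r_s                          # junction voltage seen by the diode
        return i_ph - i_0 * (math.exp(v_d / (n * n_cells * v_t)) - 1.0) - v_d / r_sh - i
    return brentq(residual, -1.0, i_ph + 1.0)      # residual is monotone in i

# Illustrative parameters for a small 36-cell module (assumed values)
params = dict(i_ph=6.5, i_0=5e-7, r_s=0.3, r_sh=300.0, n=1.3, n_cells=36)
for v in (0.0, 10.0, 15.0, 17.5):
    i = diode_current(v, **params)
    print(f"V = {v:5.1f} V   I = {i:5.2f} A   P = {v * i:6.1f} W")
```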
Abstract:
This study aimed to apply mathematical models to the growth of Nile tilapia (Oreochromis niloticus) reared in net cages in the lower São Francisco basin and to choose the model(s) that best represent the rearing conditions of the region. The nonlinear Brody, Bertalanffy, Logistic, Gompertz and Richards models were tested. The models were fitted to the weight-for-age series using the Gauss, Newton, Gradient and Marquardt methods. The NLIN procedure of the SAS® system (2003) was used to obtain parameter estimates from the available data. The best fits were obtained with the Bertalanffy, Gompertz and Logistic models, which are equivalent in explaining the growth of the animals up to 270 days of rearing. From the commercial point of view, marketing tilapia of at least 600 g is recommended, a mass the Bertalanffy, Gompertz and Logistic models estimate to be reached after 183, 181 and 184 days of rearing, respectively; for a mass of up to 1 kg, ending the rearing at 244, 244 and 243 days, respectively, is suggested.
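To illustrate the fitting and the market-weight estimates discussed above, here is a sketch using one of the named models (Gompertz) with a generic least-squares fitter; the weight-for-age values are placeholders, not the study's data, which were fitted with SAS PROC NLIN.

```python
# Sketch of fitting the Gompertz growth model W(t) = A * exp(-b * exp(-k t))
# and inverting it to estimate the age at a target market weight.
# The data below are placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    """Asymptotic weight a (g), integration constant b, maturity rate k (1/day)."""
    return a * np.exp(-b * np.exp(-k * t))

def age_at_weight(w, a, b, k):
    """Invert the Gompertz curve: age (days) at which weight w is reached (w < a)."""
    return -np.log(-np.log(w / a) / b) / k

# Placeholder weight-for-age series (days, grams)
t_obs = np.array([30, 60, 90, 120, 150, 180, 210, 240, 270])
w_obs = np.array([25, 80, 180, 310, 450, 590, 720, 830, 910])

(a, b, k), _ = curve_fit(gompertz, t_obs, w_obs, p0=(1100, 4.0, 0.012))
print(f"A = {a:.0f} g, b = {b:.2f}, k = {k:.4f} 1/day")
print(f"days to reach 600 g: {age_at_weight(600, a, b, k):.0f}")
```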