897 results for Optical pattern recognition -- Mathematical models
Abstract:
In this paper, a modification of the high-order neural network (HONN) is presented. Third-order networks are considered for achieving translation, rotation and scale invariant pattern recognition; however, they require substantial storage and computational power for the task. The proposed modified HONN takes into account a priori knowledge of the binary patterns that have to be learned, achieving significant gains in computation time and memory requirements. This modification enables the efficient computation of HONNs for image fields of greater than 100 × 100 pixels without any loss of pattern information.
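As an illustration of the invariance idea in a third-order network, the hypothetical sketch below (not the paper's code) builds a histogram of the interior angles of triangles formed by triplets of "on" pixels in a binary pattern: the angle triple is unchanged by translation, rotation and scaling, and restricting the enumeration to "on" pixels reflects the kind of a priori knowledge about binary patterns exploited above.

```python
# Hypothetical sketch: an invariant feature from triplets of "on" pixels.
import itertools
import numpy as np

def triangle_angle_histogram(binary_image, n_bins=16, max_triplets=20000):
    """Histogram of quantized interior-angle pairs over triplets of on-pixels."""
    ys, xs = np.nonzero(binary_image)
    points = np.stack([xs, ys], axis=1).astype(float)
    hist = np.zeros((n_bins, n_bins))
    for count, (i, j, k) in enumerate(itertools.combinations(range(len(points)), 3)):
        if count >= max_triplets:                 # keep the sketch cheap
            break
        a, b, c = points[i], points[j], points[k]
        la, lb, lc = (np.linalg.norm(b - c),      # side lengths opposite each vertex
                      np.linalg.norm(a - c),
                      np.linalg.norm(a - b))
        if min(la, lb, lc) < 1e-9:                # skip degenerate triplets
            continue
        # Interior angles from the law of cosines; the angle triple is invariant
        # to translation, rotation and uniform scaling of the pattern.
        alpha = np.arccos(np.clip((lb**2 + lc**2 - la**2) / (2 * lb * lc), -1, 1))
        beta = np.arccos(np.clip((la**2 + lc**2 - lb**2) / (2 * la * lc), -1, 1))
        gamma = np.pi - alpha - beta
        a1, a2 = sorted((alpha, beta, gamma))[:2]  # order-independent angle pair
        hist[int(a1 / np.pi * (n_bins - 1)), int(a2 / np.pi * (n_bins - 1))] += 1
    total = hist.sum()
    return hist / total if total > 0 else hist    # normalized invariant feature
```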
Abstract:
In this work, a new pattern recognition method based on the unification of algebraic and statistical approaches is described. The main point of the method is a voting procedure over statistically weighted regularities, which are linear separators in two-dimensional projections of the feature space. The report contains a brief description of the theoretical foundations of the method, a description of its software implementation, and the results of a series of experiments demonstrating its usefulness in practical tasks.
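A minimal sketch of the voting idea, under the assumption that the "statistically weighted regularities" can be read as per-pair linear classifiers weighted by an estimate of their accuracy; the class name and weighting scheme below are illustrative, not the authors' implementation.

```python
# Illustrative sketch: weighted voting over linear separators built on every
# two-dimensional projection of the feature space.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

class PairwiseVotingClassifier:
    def fit(self, X, y):
        self.pairs_, self.models_, self.weights_ = [], [], []
        for i, j in itertools.combinations(range(X.shape[1]), 2):
            model = LogisticRegression()
            score = cross_val_score(model, X[:, [i, j]], y, cv=3).mean()
            model.fit(X[:, [i, j]], y)
            self.pairs_.append((i, j))
            self.models_.append(model)
            self.weights_.append(score)            # statistical weight of the regularity
        self.classes_ = self.models_[0].classes_
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], len(self.classes_)))
        for (i, j), model, w in zip(self.pairs_, self.models_, self.weights_):
            pred = model.predict(X[:, [i, j]])
            for c_idx, c in enumerate(self.classes_):
                votes[:, c_idx] += w * (pred == c)  # weighted vote per class
        return self.classes_[votes.argmax(axis=1)]
```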
Abstract:
Quantitative methods can help us understand how underlying attributes contribute to movement patterns. Applying principal components analysis (PCA) to whole-body motion data may provide an objective, data-driven method to identify unique and statistically important movement patterns. Therefore, the primary purpose of this study was to determine if athletes’ movement patterns can be differentiated based on skill level or sport played using PCA. Motion capture data from 542 athletes performing three sport-screening movements (i.e. bird-dog, drop jump, T-balance) were analyzed. A PCA-based pattern recognition technique was used to analyze the data. Prior to analyzing the effects of skill level or sport on movement patterns, methodological considerations related to the motion analysis reference coordinate system were assessed. All analyses were addressed as case studies. For the first case study, referencing motion data to a global (lab-based) coordinate system compared to a local (segment-based) coordinate system affected the ability to interpret important movement features. Furthermore, for the second case study, where the interpretability of PCs was assessed when data were referenced to a stationary versus a moving segment-based coordinate system, PCs were more interpretable when data were referenced to a stationary coordinate system for both the bird-dog and T-balance tasks. As a result of the findings from case studies 1 and 2, only stationary segment-based coordinate systems were used in case studies 3 and 4. During the bird-dog task, elite athletes had significantly lower scores than recreational athletes for principal component (PC) 1. For the T-balance movement, elite athletes had significantly lower scores than recreational athletes for PC 2. In both analyses, the lower scores in elite athletes represented a greater range of motion. Finally, case study 4 examined differences in the movement patterns of athletes who competed in different sports, and significant differences in technique were detected during the bird-dog task. Through these case studies, this thesis highlights the feasibility of applying PCA as a movement pattern recognition technique in athletes. Future research can build on this proof-of-principle work to develop robust quantitative methods to help us better understand how underlying attributes (e.g. height, sex, ability, injury history, training type) contribute to performance.
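A hedged sketch of how a PCA-based comparison of PC scores between athlete groups could be set up; the data layout, the group labels and the use of Welch's t-test are assumptions for illustration, not the thesis code.

```python
# Illustrative sketch: PCA on trial-by-trial motion data, then a group
# comparison of PC scores.  Assumed layout: X has one row per trial
# (a time-normalized, flattened whole-body trajectory); `group` is an
# array of labels such as "elite" / "rec".
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

def compare_movement_patterns(X, group, n_components=5):
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                  # PCA centres the data internally
    for pc in range(n_components):
        elite = scores[group == "elite", pc]
        rec = scores[group == "rec", pc]
        _, p = stats.ttest_ind(elite, rec, equal_var=False)   # Welch's t-test (assumed)
        print(f"PC{pc + 1}: explained variance {pca.explained_variance_ratio_[pc]:.2f}, "
              f"elite vs recreational p = {p:.3f}")
    return pca, scores
```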
Abstract:
This Licentiate Thesis is devoted to the presentation and discussion of some new contributions in applied mathematics directed towards scientific computing in sports engineering. It considers inverse problems of biomechanical simulations with rigid-body musculoskeletal systems, especially in cross-country skiing. This contrasts with the main research on cross-country skiing biomechanics, which is based mainly on experimental testing. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and puts them into a more general framework. Two papers (D and E) consider studies of real questions in cross-country skiing, which are modelled and simulated. The results give some interesting indications concerning these challenging questions, which can be used as a basis for further research; however, the measurements are not accurate enough to give final answers. Paper C is a simulation study that is more extensive than papers D and E, and is compared to electromyography measurements in the literature. Validation in biomechanical simulations is difficult, and reducing mathematical errors is one way of getting closer to more realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics. Paper B is a technical report which describes the problem formulation, mathematical models and simulation from paper A in more detail. Our new modelling, together with the simulations, enables new possibilities. This is similar to simulations of applications in other engineering fields, and in the same way it must be handled with care in order to achieve reliable results. The results in this thesis indicate that it can be very useful to use mathematical modelling and numerical simulation when describing cross-country skiing biomechanics. Hence, this thesis contributes to the possibility of beginning to use and develop such modelling and simulation techniques in this context as well.
Abstract:
This work presents a computational code, called MOMENTS, developed for use in process control to determine a characteristic transfer function of industrial units when radiotracer techniques are applied to study the unit's performance. The methodology is based on measuring the residence time distribution (RTD) function and calculating the first and second temporal moments of the tracer data obtained by two NaI scintillation detectors positioned to register the complete tracer movement inside the unit. A non-linear regression technique is used to fit various mathematical models, and a statistical test selects the best result for the transfer function. Using the MOMENTS code, twelve different models can be used to fit a curve and calculate technical parameters of the unit.
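For illustration, a minimal computation of the first and second temporal moments of an RTD curve might look as follows; the tanks-in-series estimate at the end is just one example of a model parameter that such moments can feed, and the function is a sketch, not the MOMENTS code itself.

```python
# Illustrative moments calculation from a measured tracer curve.
import numpy as np
from scipy.integrate import trapezoid

def rtd_moments(t, counts):
    """t: sampling times; counts: background-corrected tracer curve from one detector."""
    e = counts / trapezoid(counts, t)                 # normalized RTD, E(t)
    tau = trapezoid(t * e, t)                         # first temporal moment (mean residence time)
    var = trapezoid((t - tau) ** 2 * e, t)            # second central moment (variance)
    n_tanks = tau ** 2 / var                          # tanks-in-series estimate (example model)
    return tau, var, n_tanks
```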
Abstract:
Economic losses resulting from disease development can be reduced by accurate and early detection of plant pathogens. Early detection can provide the grower with useful information on optimal crop rotation patterns, varietal selection, appropriate control measures, harvest date and post-harvest handling. Classical methods for the isolation of pathogens are commonly used only after disease symptoms appear. This frequently results in a delay in the application of control measures at potentially important periods in crop production. This paper describes the application of both antibody- and DNA-based systems to monitor infection risk of air- and soil-borne fungal pathogens and the use of this information with mathematical models describing the risk of disease associated with environmental parameters.
Abstract:
Although tyrosine kinase inhibitors (TKIs) such as imatinib have transformed chronic myelogenous leukemia (CML) into a chronic condition, these therapies are not curative in the majority of cases. Most patients must continue TKI therapy indefinitely, a requirement that is expensive and compromises a patient's quality of life. While TKIs are known to reduce leukemic cells' proliferative capacity and to induce apoptosis, their effects on leukemic stem cells, the immune system, and the microenvironment are not fully understood. A more complete understanding of their global therapeutic effects would help us to identify any limitations of TKI monotherapy and to address these issues through novel combination therapies. Mathematical models are a complementary tool to experimental and clinical data that can provide valuable insights into the underlying mechanisms of TKI therapy. Previous modeling efforts have focused on CML patients who show biphasic and triphasic exponential declines in BCR-ABL ratio during therapy. However, our patient data indicate that many patients treated with TKIs show fluctuations in BCR-ABL ratio yet are able to achieve durable remissions. To investigate these fluctuations, we construct a mathematical model that integrates CML with a patient's autologous immune response to the disease. In our model, we define an immune window, which is an intermediate range of leukemic concentrations that lead to an effective immune response against CML. While small leukemic concentrations provide insufficient stimulus, large leukemic concentrations actively suppress a patient's immune system, thus limiting its ability to respond. Our patient data and modeling results suggest that at diagnosis, a patient's high leukemic concentration is able to suppress their immune system. TKI therapy drives the leukemic population into the immune window, allowing the patient's immune cells to expand and eventually mount an efficient response against the residual CML. This response drives the leukemic population below the immune window, causing the immune population to contract and allowing the leukemia to partially recover. The leukemia eventually reenters the immune window, thus stimulating a sequence of weaker immune responses as the two populations approach equilibrium. We hypothesize that a patient's autologous immune response to CML may explain the fluctuations in BCR-ABL ratio that are regularly seen during TKI therapy. These fluctuations may serve as a signature of a patient's individual immune response to CML. By applying our modeling framework to patient data, we are able to construct an immune profile that can then be used to propose patient-specific combination therapies aimed at further reducing a patient's leukemic burden. Our characterization of a patient's anti-leukemia immune response may be especially valuable in the study of drug resistance, treatment cessation, and combination therapy.
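A toy ODE sketch of the qualitative "immune window" mechanism described above; the functional forms and parameter values are illustrative assumptions, not the authors' published model.

```python
# Toy sketch: L is the leukemic load, Z the autologous immune population.
# The stimulation term peaks at an intermediate L, which plays the role of
# the "immune window": weak stimulus for small L, suppression for large L.
from scipy.integrate import solve_ivp

def cml_immune(t, y, r=0.1, K=1e6, d_tki=0.08, k=2e-6, s=0.5, p=0.4, c=1e4, d_z=0.05):
    L, Z = y
    stimulation = p * (L / c) / (1.0 + (L / c) ** 2)    # peaks at L = c
    dL = r * L * (1.0 - L / K) - d_tki * L - k * L * Z  # growth, TKI kill, immune kill
    dZ = s + stimulation * Z - d_z * Z                  # supply, stimulation, natural death
    return [dL, dZ]

sol = solve_ivp(cml_immune, (0.0, 600.0), [1e6, 10.0], max_step=1.0)
# For suitable parameters, plotting sol.y[0] against sol.t shows a non-monotone
# decline as L repeatedly re-enters the immune window, qualitatively mirroring
# the BCR-ABL fluctuations discussed above.
```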
Abstract:
People, animals and the environment can be exposed to multiple chemicals at once from a variety of sources, but current risk assessment is usually carried out for one chemical substance at a time. In human health risk assessment, ingestion of food is considered a major route of exposure to many contaminants, namely mycotoxins, a wide group of fungal secondary metabolites that are known to potentially cause toxic and carcinogenic outcomes. Mycotoxins are commonly found in a variety of foods, including those intended for consumption by infants and young children, and have been found in processed cereal-based foods available on the Portuguese market. The use of mathematical models, including probabilistic approaches using Monte Carlo simulations, is a prominent issue in human health risk assessment in general and in mycotoxin exposure assessment in particular. The present study aims to characterize, for the first time, the risk associated with the exposure of Portuguese children to single and multiple mycotoxins present in processed cereal-based foods (CBF). Food consumption data for Portuguese children (0-3 years old, n=103) were collected using a 3-day food diary. Contamination data concerning the quantification of 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) were obtained for 20 CBF samples marketed in 2014 and 2015 in Lisbon; samples were analyzed by HPLC-FLD, LC-MS/MS and GC-MS. Daily exposure of children to mycotoxins was estimated using deterministic and probabilistic approaches. Different strategies were used to treat the left-censored data. For aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin exposure; the magnitude of the MoE gives an indication of the risk level. For the remaining mycotoxins, the estimated exposure was compared to the reference dose values (TDI) in order to calculate hazard quotients (HQ, the ratio between exposure and a reference dose). For the cumulative risk assessment of multiple mycotoxins, the concentration addition (CA) concept was used. The combined margin of exposure (MoET) and the hazard index (HI) were calculated for aflatoxins and the remaining mycotoxins, respectively. 71% of the analyzed CBF samples were contaminated with mycotoxins (with values below the legal limits), and approximately 56% of the studied children consumed CBF at least once during these 3 days. Preliminary results showed that children's exposure to single mycotoxins present in CBF was below the TDI. Aflatoxin MoE and MoET values revealed a reduced potential risk from exposure through consumption of CBF (with values around 10000 or more). HQ and HI values for the remaining mycotoxins were below 1. Children are a particularly vulnerable population group with respect to food contaminants, and the present results point out an urgent need to establish legal limits and control strategies regarding the presence of multiple mycotoxins in children's foods in order to protect their health. The development of packaging materials with antifungal properties is a possible solution to control the growth of moulds and consequently reduce mycotoxin production, contributing to guaranteeing the quality and safety of foods intended for children's consumption.
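A hedged sketch of the probabilistic (Monte Carlo) exposure calculation together with the MoE and HQ ratios defined above; every distribution and number here is a placeholder, not the study's data.

```python
# Illustrative Monte Carlo exposure assessment with placeholder inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                                   # Monte Carlo iterations
intake = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)   # CBF intake, g/kg bw/day (placeholder)
level = rng.lognormal(mean=np.log(0.001), sigma=1.0, size=n)  # mycotoxin level, ug/g (placeholder)
exposure = intake * level                                     # ug/kg bw/day

BMDL = 0.4                  # benchmark dose lower confidence limit, ug/kg bw/day (placeholder)
TDI = 1.0                   # tolerable daily intake, ug/kg bw/day (placeholder)

moe = BMDL / exposure       # margin of exposure (aflatoxin-style metric)
hq = exposure / TDI         # hazard quotient (TDI-based metric)

print(f"5th percentile MoE: {np.percentile(moe, 5):.0f}")     # values around 10000 or more read as low concern
print(f"95th percentile HQ: {np.percentile(hq, 95):.3f}")     # values below 1 read as low concern
# Under concentration addition, a hazard index for several TDI-based mycotoxins
# would be the sum of their individual HQs, and MoET combines the aflatoxin MoEs.
```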
Abstract:
Close similarities have been found between the otoliths of sea-caught and laboratory-reared larvae of the common sole Solea solea (L.), given appropriate temperatures and nourishment of the latter. However, from hatching to mouth formation, and during metamorphosis, sole otoliths have proven difficult to read because the increments may be less regular and of low contrast. In this study, the growth increments in otoliths of larvae reared at 12 degrees C were counted by light microscopy to test the hypothesis of daily deposition, with some results verified using scanning electron microscopy (SEM), and by image analysis, in order to compare the reliability of the 2 methods in age estimation. Age was first estimated (in days posthatch) from light micrographs of whole mounted otoliths. Counts were initiated from the increment formed at the time of mouth opening (Day 4). The average incremental deposition rate was consistent with the daily hypothesis. However, the light-micrograph readings tended to underestimate the mean ages of the larvae. Errors were probably associated with the low-contrast increments: those deposited after mouth formation during the transition to first feeding, and those deposited from the onset of eye migration (about 20 d posthatch) during metamorphosis. SEM failed to resolve these low-contrast areas accurately because of poor etching. A method using image analysis was applied to a subsample of micrograph-counted otoliths. The image analysis was supported by a pattern recognition algorithm (Growth Demodulation Algorithm, GDA). For each otolith, the GDA method integrated the growth pattern of these larval otoliths into averaged data from different radial profiles, in order to demodulate the exponential trend of the signal before spectral analysis (Fast Fourier Transformation, FFT). This second method allowed both more precise designation of increments, particularly for low-contrast areas, and more accurate readings, but increased the error variability in mean age estimation. The variability is probably due to a still rough perception of otolith increments by the GDA method, counting being achieved through a theoretical exponential pattern and mean estimates being given by the FFT. Although this error variability was greater than expected, the method provides for improvement in both speed and accuracy in otolith readings.
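One simple reading of "demodulating the exponential trend before spectral analysis" is sketched below; this is illustrative code, not the published GDA: an exponential trend is fitted and removed from a radial grey-level profile, and the dominant FFT peak of the residual is taken as an increment-count estimate.

```python
# Illustrative sketch: detrend a radial otolith profile, then estimate the
# number of increments from the dominant spectral peak.
import numpy as np

def estimate_increments(profile):
    """profile: 1-D array of grey levels along one radius (core -> edge)."""
    x = np.arange(len(profile), dtype=float)
    # Demodulate: fit and remove an exponential trend via a log-linear fit.
    positive = np.clip(profile, 1e-6, None)
    slope, intercept = np.polyfit(x, np.log(positive), 1)
    residual = profile - np.exp(intercept + slope * x)
    residual = residual - residual.mean()
    # Spectral analysis (FFT) of the detrended signal.
    spectrum = np.abs(np.fft.rfft(residual * np.hanning(len(residual))))
    freqs = np.fft.rfftfreq(len(residual), d=1.0)
    k = spectrum[1:].argmax() + 1             # skip the zero-frequency bin
    return freqs[k] * len(residual)           # approximate increment count along the profile
```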
Abstract:
We summarise the properties and the fundamental mathematical results associated with basic models which describe coagulation and fragmentation processes in a deterministic manner and in which cluster size is a discrete quantity (an integer multiple of some basic unit size). In particular, we discuss Smoluchowski's equation for aggregation, the Becker-Döring model of simultaneous aggregation and fragmentation, and more general models involving coagulation and fragmentation.
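For reference, writing c_j(t) for the concentration of clusters of size j, the two discrete models mentioned above take the following standard forms.

```latex
% Discrete Smoluchowski coagulation equation, with symmetric kernel a_{j,k}:
\frac{\mathrm{d}c_j}{\mathrm{d}t}
  = \frac{1}{2}\sum_{k=1}^{j-1} a_{j-k,\,k}\,c_{j-k}\,c_k
    \;-\; c_j\sum_{k=1}^{\infty} a_{j,k}\,c_k , \qquad j\ge 1 .

% Becker--Doring equations (clusters change size only by gaining or losing
% a single monomer), with fluxes J_j = a_j c_1 c_j - b_{j+1} c_{j+1}:
\frac{\mathrm{d}c_j}{\mathrm{d}t} = J_{j-1}-J_j \quad (j\ge 2), \qquad
\frac{\mathrm{d}c_1}{\mathrm{d}t} = -J_1-\sum_{j=1}^{\infty} J_j .
```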
Abstract:
Interaction between the complement system and carbon nanotubes (CNTs) can modify their intended biomedical applications. Pristine and derivatised CNTs can activate complement primarily via the classical pathway, which enhances uptake of CNTs and suppresses the pro-inflammatory response by immune cells. Here, we report that the interaction of C1q, the classical pathway recognition molecule, with CNTs involves charge pattern and classical pathway activation that is partly inhibited by factor H, a complement regulator. C1q and its globular modules, but not factor H, enhanced uptake of CNTs by macrophages and modulated the pro-inflammatory immune response. Thus, soluble complement factors can interact differentially with CNTs and alter the immune response even without complement activation. Coating CNTs with recombinant C1q globular heads offers a novel way of controlling classical pathway activation in nanotherapeutics. Surprisingly, the globular heads also enhance clearance by phagocytes and down-regulate inflammation, suggesting unexpected complexity in receptor interaction. From the Clinical Editor: Carbon nanotubes (CNTs) may be useful in the clinical setting as targeting drug carriers. However, it is also well known that they can interact with and activate the complement system, which may have a negative impact on the applicability of CNTs. In this study, the authors functionalized multi-walled CNTs (MWNTs) and investigated their interaction with the complement pathway. These studies are important for gaining further understanding of the underlying mechanism in preparation for future use of CNTs in the clinical setting.
Abstract:
Ecological models written in a mathematical language L(M), or model language, with a given style or methodology can be considered as a text. It is possible to apply statistical linguistic laws, and the experimental results demonstrate that the behaviour of a mathematical model is the same as that of any literary text in any natural language. A text has the following characteristics: (a) the variables, their transformed functions and parameters are the lexic units (LUN) of ecological models; (b) the syllables are constituted by a LUN, or a chain of them, separated by operating or ordering LUNs; (c) the flow equations are words; and (d) the distribution of words (LUN and CLUN) according to their lengths follows a Poisson distribution, Chebanov's law. It is founded on Vakar's formula, which is calculated in the same way as the linguistic entropy for L(M). We apply these ideas to practical examples using the MARIOLA model. In this paper, the problem of the lengths of the simple lexic units, composed lexic units and words of text models is studied, expressing these lengths in numbers of primitive symbols and syllables. The use of these linguistic laws makes it possible to indicate the degree of information given by an ecological model.
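A small illustrative check of the Poisson (Chebanov) fit described above might proceed as follows; the 1-displacement of the Poisson variable is an assumption of one common formulation of Chebanov's law, and the function is not taken from the paper.

```python
# Illustrative sketch: compare the observed distribution of word lengths
# (in syllables) of a model text against a Poisson distribution with the same mean.
import numpy as np
from scipy import stats

def chebanov_check(word_lengths):
    """word_lengths: lengths (in syllables) of the words of the model text."""
    lengths = np.asarray(word_lengths, dtype=int)
    shifted = lengths - 1                        # assume (length - 1) ~ Poisson(lam)
    lam = shifted.mean()
    observed = np.bincount(shifted)
    expected = stats.poisson.pmf(np.arange(len(observed)), lam) * len(lengths)
    return observed, expected                    # counts per length class, for comparison
```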
Abstract:
In this thesis we present a mathematical formulation of the interaction between microorganisms such as bacteria or amoebae and chemicals, often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive several models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, and it is derived within two different frameworks: a macroscopic perspective and a microscopic perspective, in which we start with a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by introducing a degenerate diffusion parameter, which depends on the density itself via a power law. We then derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system through a friction force. In the limit of strong friction, the model reduces to the parabolic model, whereas in the limit of weak friction we recover a hyperbolic model. Finally, we analyze the instability condition, which is the condition that leads to aggregation, and we describe the different kinds of aggregates we may obtain: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse and the gravitational collapse (Jeans instability).
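For reference, the parabolic-parabolic Keller-Segel model referred to above is usually written, for the cell density rho(x,t) and the chemoattractant concentration c(x,t), as follows.

```latex
% Parabolic-parabolic Keller-Segel system (standard form), with cell diffusivity D,
% chemotactic sensitivity chi, chemical diffusivity D_c, production rate a and
% degradation rate b, all positive constants:
\partial_t \rho = \nabla\!\cdot\!\left( D\,\nabla\rho - \chi\,\rho\,\nabla c \right),
\qquad
\partial_t c = D_c\,\Delta c + a\,\rho - b\,c .
```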
Abstract:
In this work we focus on pattern recognition methods for EMG-based upper-limb prosthetic control. After giving a detailed review of the most widely used classification methods, we propose a new classification approach. It results from a comparison of the Fourier analysis between able-bodied and trans-radial amputee subjects. We thus suggest a different classification method which considers each surface electrode's contribution separately, together with five time-domain features, obtaining an average classification accuracy equal to 75% on a sample of trans-radial amputees. We propose an automatic feature selection procedure, formulated as a minimization problem, in order to improve the method and its robustness.
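A hedged sketch of per-electrode time-domain feature extraction for this kind of EMG classifier; the abstract does not name the five features, so the set below (MAV, WL, ZC, SSC, RMS) is an assumed, commonly used choice rather than the authors' selection.

```python
# Illustrative sketch: five time-domain features computed separately for each
# surface electrode, then concatenated into one feature vector per analysis window.
import numpy as np

def time_domain_features(window, zc_thresh=0.01, ssc_thresh=0.01):
    """window: 1-D EMG samples from a single surface electrode."""
    x = np.asarray(window, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                                        # mean absolute value
    wl = np.sum(np.abs(dx))                                         # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > zc_thresh))    # zero crossings
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) > ssc_thresh))  # slope sign changes
    rms = np.sqrt(np.mean(x ** 2))                                  # root mean square
    return np.array([mav, wl, zc, ssc, rms])

def feature_matrix(windows):
    """windows: array (n_windows, n_electrodes, n_samples); features kept per electrode."""
    return np.stack([
        np.concatenate([time_domain_features(w[e]) for e in range(w.shape[0])])
        for w in windows
    ])
```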