965 results for Signal processing - Mathematical models
Abstract:
The need for high performance, high precision, and energy saving in rotating machinery demands an alternative solution to traditional bearings. Because of their contactless operating principle, rotating machines employing active magnetic bearings (AMBs) provide many advantages over traditional ones. Advantages such as contamination-free operation, low maintenance costs, high rotational speeds, low parasitic losses, programmable stiffness and damping, and vibration insulation come at the expense of high cost and a complex technical solution. All these properties make AMBs appropriate primarily for specific and highly demanding applications. High-performance, high-precision control requires model-based control methods and accurate models of the flexible rotor. In turn, complex models lead to high-order controllers and impose a considerable computational burden. Fortunately, in recent years, advances in signal processing devices have provided a new perspective on the real-time control of AMBs. The design and real-time digital implementation of high-order LQ controllers, with a focus on fast execution times, are the subjects of this work. In particular, control design and implementation in field programmable gate array (FPGA) circuits are investigated. The physical constraints of the system guide the selection of the optimal weighting matrices. The plant model is augmented with appropriate disturbance models. A compensation of the force-field nonlinearities is proposed to decrease the uncertainty of the actuator. A disturbance-observer-based unbalance compensation for canceling the magnetic force vibrations, or the vibrations in the measured positions, is presented. The theoretical studies are verified by practical experiments on a custom-built laboratory test rig. The test rig uses a prototyping control platform developed in the scope of this work. To sum up, the work takes a step in the direction of an embedded single-chip FPGA-based controller for AMBs.
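For illustration, the sketch below shows the core LQ design step described above, using a toy second-order rigid-rotor model in place of the thesis's flexible-rotor model; all matrices and weights are assumed placeholder values, not the thesis's.

```python
# Minimal LQ (LQR) design sketch for a linearized AMB axis; toy values only.
import numpy as np
from scipy.linalg import solve_continuous_are

# One rigid-body mode, x = [position, velocity], linearized at the operating point.
ks, ki, m = 1.0e6, 200.0, 10.0        # destabilizing stiffness, current gain, mass (assumed)
A = np.array([[0.0, 1.0],
              [ks / m, 0.0]])          # open loop is unstable, as for any AMB
B = np.array([[0.0],
              [ki / m]])

# Weighting matrices would be chosen from physical constraints (allowed
# displacement and coil current); the values here are arbitrary examples.
Q = np.diag([1.0e8, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # state feedback u = -K x

print("LQ gain:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```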
Abstract:
The aim of this study is to define a new statistic, PVL, based on the relative distance between the likelihood associated with the simulation replications and the likelihood of the conceptual model. Our results, drawn from several simulation experiments of a clinical trial, show that the PVL statistic range can be a good measure of stability for establishing when a computational model verifies the underlying conceptual model. PVL also improves the analysis of simulation replications, because a single statistic is associated with all the replications. The study also presents several verification scenarios, obtained by altering the simulation model, that show the usefulness of PVL. Further simulation experiments suggest that a 0 to 20 % range may define adequate limits for the verification problem, if considered from the viewpoint of an equivalence test.
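The abstract does not give the exact formula for PVL; purely as an illustration, the sketch below implements one plausible reading of it, a relative log-likelihood distance checked against the suggested 0 to 20 % range. The definition inside `pvl` is a hypothetical guess, not the authors' statistic.

```python
# Hypothetical PVL-style statistic: relative distance (in %) between each
# replication's log-likelihood and the conceptual model's log-likelihood.
import numpy as np

def pvl(loglik_replications, loglik_conceptual):
    """Illustrative definition only; the paper's exact formula is not given here."""
    ll = np.asarray(loglik_replications, dtype=float)
    return 100.0 * np.abs(ll - loglik_conceptual) / np.abs(loglik_conceptual)

# Example: 10 replications; verification "passes" if all values fall in 0-20 %.
rng = np.random.default_rng(0)
ll_reps = -500.0 + rng.normal(0.0, 20.0, size=10)
stats = pvl(ll_reps, loglik_conceptual=-510.0)
print("PVL range: %.1f%% to %.1f%%" % (stats.min(), stats.max()))
print("verified:", bool(np.all(stats <= 20.0)))
```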
Abstract:
Coherent anti-Stokes Raman scattering (CARS) microscopy is rapidly developing into a unique microscopic tool in biophysics, biology and the material sciences. The nonlinear nature of CARS spectroscopy complicates the analysis of the received spectra, and mathematical methods have been developed for signal processing and for the calculation of spectra. Fourier self-deconvolution is a special high-pass FFT filter that synthetically narrows the effective trace bandwidth; it can effectively reduce noise, which may lie at higher spatial frequencies than the peaks, without losing peak resolution. The idea of this work is to test the possibility of using wavelet decomposition for background and noise removal in spectroscopy, and Fourier transformation for line narrowing.
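As a hedged illustration of the two processing steps named above, the sketch below combines wavelet soft-thresholding (via PyWavelets) for noise removal with a simple Fourier self-deconvolution for line narrowing; the wavelet choice, line half-width `gamma`, and frequency cutoff are assumptions, not values from the work.

```python
# Sketch: wavelet denoising followed by Fourier self-deconvolution (FSD).
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(spectrum, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients to suppress high-frequency noise."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

def fourier_self_deconvolution(spectrum, gamma, dx, cutoff=0.3):
    """Narrow Lorentzian lines of half-width `gamma` by dividing out their
    exponential decay in the Fourier domain; coefficients beyond `cutoff`
    (fraction of the band) are zeroed, acting as a crude apodization."""
    n = len(spectrum)
    t = np.fft.rfftfreq(n, d=dx)
    ift = np.fft.rfft(spectrum)
    window = np.where(t <= cutoff * t.max(), np.exp(2.0 * np.pi * gamma * t), 0.0)
    return np.fft.irfft(ift * window, n=n)

# Example on a synthetic noisy two-peak Raman-like band:
x = np.linspace(0.0, 100.0, 2048)
clean = 1 / (1 + ((x - 45) / 2.0) ** 2) + 1 / (1 + ((x - 52) / 2.0) ** 2)
noisy = clean + 0.02 * np.random.default_rng(1).normal(size=x.size)
narrowed = fourier_self_deconvolution(wavelet_denoise(noisy), gamma=2.0, dx=x[1] - x[0])
```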
Abstract:
The aim of this work is to compare two families of mathematical models for their respective capability to capture the statistical properties of real electricity spot market time series. The first family is ARMA-GARCH models and the second is mean-reverting Ornstein-Uhlenbeck models. Both were applied to two price series of the Nordic Nord Pool spot market for electricity, namely, the System prices and the DenmarkW prices, and the parameters of both models were calibrated from the real time series. After carrying out simulations with the optimal models from both families, we conclude that neither ARMA-GARCH models nor conventional mean-reverting Ornstein-Uhlenbeck models, even when calibrated optimally with real electricity spot market price or return series, capture the statistical characteristics of the real series. However, in the case of less spiky behavior (the System prices), the mean-reverting Ornstein-Uhlenbeck model can be seen to partially succeed in this task.
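For illustration, a minimal calibration sketch for the mean-reverting Ornstein-Uhlenbeck model via its exact AR(1) discretization; the input series is synthetic rather than Nord Pool data, and the parameter values are arbitrary.

```python
# OU model dX = theta*(mu - X) dt + sigma dW, calibrated by AR(1) regression.
import numpy as np

def fit_ou(x, dt):
    """Least-squares fit of X[k+1] = a*X[k] + b + noise, mapped back to
    the continuous-time OU parameters (theta, mu, sigma)."""
    x0, x1 = x[:-1], x[1:]
    a, b = np.polyfit(x0, x1, 1)                 # slope, intercept
    theta = -np.log(a) / dt
    mu = b / (1.0 - a)
    resid = x1 - (a * x0 + b)
    sigma = resid.std(ddof=2) * np.sqrt(2.0 * theta / (1.0 - a**2))
    return theta, mu, sigma

# Synthetic daily "spot price" series generated from known parameters:
rng = np.random.default_rng(2)
dt, n, theta, mu, sigma = 1.0 / 365.0, 2000, 10.0, 30.0, 15.0
x = np.empty(n); x[0] = mu
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))
for k in range(n - 1):
    x[k + 1] = a * x[k] + mu * (1.0 - a) + sd * rng.normal()
print(fit_ou(x, dt))  # should recover roughly (10, 30, 15)
```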
Abstract:
Asian rust of soybean [Glycine max (L.) Merril] is one of the most important fungal diseases of this crop worldwide. The recent introduction of Phakopsora pachyrhizi Syd. & P. Syd in the Americas represents a major threat to soybean production in the main growing regions, and significant losses have already been reported. P. pachyrhizi is extremely aggressive under favorable weather conditions, causing rapid plant defoliation. Epidemiological studies, under both controlled and natural environmental conditions, have been done for several decades with the aim of elucidating factors that affect the disease cycle as a basis for disease modeling. The recent spread of Asian soybean rust to major production regions in the world has promoted new development, testing and application of mathematical models to assess the risk and predict the disease. These efforts have included the integration of new data, epidemiological knowledge, statistical methods, and advances in computer simulation to develop models and systems with different spatial and temporal scales, objectives and audience. In this review, we present a comprehensive discussion on the models and systems that have been tested to predict and assess the risk of Asian soybean rust. Limitations, uncertainties and challenges for modelers are also discussed.
Abstract:
In São Paulo State, mainly in rural areas, wooden poles are used for different purposes. In this context, wood in contact with the ground deteriorates faster, which is generally associated with environmental factors and, especially, with the presence of fungi and insects. With mathematical models, the useful life of wooden structures can be predicted by obtaining "climatic indexes" that indicate, comparatively among the areas studied, which have a greater or lesser tendency toward attack by fungi and insects. In this work, using climatological data from several cities in São Paulo State, a simplified mathematical model was obtained to measure the aggressiveness of the environment to wood in contact with the soil.
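The simplified model itself is not reproduced in the abstract; as an illustration of the "climatic index" idea, the sketch below computes a Scheffer-type climate index, a standard form for ranking sites by wood-decay hazard from monthly climate data. Its use here is an assumption, not the study's actual model.

```python
# Scheffer-type climate index from monthly mean temperature and rain days.
def scheffer_index(mean_temp_c, rain_days):
    """Sum over 12 months of (T - 2) * (D - 3) / 16.7, where T is the monthly
    mean temperature (deg C) and D the number of days with >= 0.25 mm of rain.
    Higher values indicate a harsher environment for exposed wood."""
    return sum((t - 2.0) * (d - 3.0) for t, d in zip(mean_temp_c, rain_days)) / 16.7

# Example with made-up monthly data for one site:
temps = [24, 24, 23, 21, 18, 17, 17, 18, 19, 21, 22, 23]
rainy = [15, 14, 12, 7, 6, 5, 4, 4, 7, 10, 12, 14]
print("climate index:", round(scheffer_index(temps, rainy), 1))
```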
Abstract:
The broiler rectal temperature (t rectal) is one of the most important physiological responses for classifying animal thermal comfort. Therefore, the aim of this study was to fit regression models to predict the rectal temperature (t rectal) of broiler chickens under different thermal conditions based on age (A) and either a meteorological variable (air temperature, t air), a thermal comfort index (temperature and humidity index, THI, or black globe humidity index, BGHI), or a physical quantity, enthalpy (H). In addition, by inverting these models with the expected t rectal intervals for each age, the comfort limits of t air, THI, BGHI and H for chicks in the heating phase were determined, aiding the validation of the equations and providing preliminary limits for H. The experimental data used to fit the mathematical models were collected on two commercial poultry farms, with Cobb chicks from 1 to 14 days of age. Applying the four adjusted models, it was possible to predict t rectal satisfactorily and to determine the lower and upper comfort thresholds of broilers, as well as to invert the models to predict the environmental H for the chicks' first 14 days of life.
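For illustration, a sketch of the fit-and-invert idea under assumed coefficients: a hypothetical quadratic response surface for t rectal as a function of age and THI, inverted numerically to recover the THI comfort band. The coefficients and comfort interval are invented, not the study's.

```python
# Invert a fitted t_rectal(age, THI) model to find the THI comfort band.
import numpy as np
from scipy.optimize import brentq

coef = np.array([38.0, 0.02, 0.015, 0.0008])   # b0, b1*A, b2*THI, b3*THI^2 (assumed)

def t_rectal(age_days, thi):
    return coef[0] + coef[1] * age_days + coef[2] * thi + coef[3] * thi**2

def comfort_thi(age_days, t_low, t_high):
    """THI values at which predicted t_rectal crosses the comfort limits."""
    lo = brentq(lambda h: t_rectal(age_days, h) - t_low, 0.0, 100.0)
    hi = brentq(lambda h: t_rectal(age_days, h) - t_high, 0.0, 100.0)
    return lo, hi

print(comfort_thi(age_days=7, t_low=39.4, t_high=40.5))
```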
Abstract:
The draft forces of soil-engaging tines, and their theoretical analysis compared with existing mathematical models, had not yet been studied in Rio Grande do Sul soils. Among the existing models, those that most closely fit the draft forces measured in the field were established for two Rio Grande do Sul soils: an Albaqualf and a Paleudult. Of the models studied, those suggested by Reece (the so-called "Universal Earthmoving Equation"), Hettiaratchi and Reece, and Godwin and Spoor fitted best when the calculated results were compared with those measured in situ. Given the lower complexity of Reece's model, it is suggested that this model be used for predicting the draft forces of narrow tines in the Albaqualf and Paleudult.
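Reece's "Universal Earthmoving Equation" has the well-known form sketched below; the soil properties and N-factors are illustrative placeholders (in practice the N-factors are read from charts as functions of rake angle and soil friction).

```python
# Reece's Universal Earthmoving Equation for tool draft force.
def reece_draft_force(gamma, z, c, ca, q, w, Ng, Nc, Nca, Nq):
    """P = (gamma*z^2*Ng + c*z*Nc + ca*z*Nca + q*z*Nq) * w
    gamma: soil unit weight (N/m^3), z: working depth (m), c: cohesion (Pa),
    ca: soil-metal adhesion (Pa), q: surcharge (Pa), w: tool width (m);
    Ng, Nc, Nca, Nq: dimensionless earthmoving factors."""
    return (gamma * z**2 * Ng + c * z * Nc + ca * z * Nca + q * z * Nq) * w

# Example with illustrative values for a narrow tine at 0.15 m depth:
print(reece_draft_force(gamma=18000.0, z=0.15, c=15000.0, ca=5000.0,
                        q=0.0, w=0.05, Ng=2.5, Nc=6.0, Nca=1.5, Nq=3.0), "N")
```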
Abstract:
This study aimed to apply mathematical models to the growth of Nile tilapia (Oreochromis niloticus) reared in net cages in the lower São Francisco basin and to choose the model(s) that best represent the rearing conditions of the region. The nonlinear models of Brody, Bertalanffy, Logistic, Gompertz, and Richards were tested. The models were fitted to the weight-for-age series using the Gauss, Newton, Gradient, and Marquardt methods, with the NLIN procedure of the SAS® System (2003) used to estimate the parameters from the available data. The best fits were obtained with the Bertalanffy, Gompertz and Logistic models, which are equivalent in explaining the growth of the animals up to 270 days of rearing. From a commercial point of view, it is recommended that tilapia be marketed at a minimum of 600 g, which the Bertalanffy, Gompertz and Logistic models estimate to be reached after 183, 181 and 184 days of rearing, respectively; for masses up to 1 kg, ending the rearing at 244, 244 and 243 days, respectively, is suggested.
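For illustration, a sketch of fitting three of the named nonlinear growth models, with scipy.optimize.curve_fit standing in for SAS PROC NLIN; the weight-age data points and starting values are invented examples, not the study's data.

```python
# Fit Gompertz, Logistic, and Bertalanffy growth curves to weight-age data.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

def bertalanffy(t, A, b, k):
    return A * (1.0 - b * np.exp(-k * t)) ** 3

t = np.array([30, 60, 90, 120, 150, 180, 210, 240, 270], dtype=float)   # days
w = np.array([25, 90, 210, 360, 520, 660, 780, 870, 930], dtype=float)  # g (made up)

grid = np.linspace(1.0, 400.0, 4000)
for name, f, p0 in [("Gompertz", gompertz, (1000.0, 4.0, 0.02)),
                    ("Logistic", logistic, (1000.0, 20.0, 0.03)),
                    ("Bertalanffy", bertalanffy, (1000.0, 0.8, 0.01))]:
    p, _ = curve_fit(f, t, w, p0=p0, maxfev=10000)
    age_600 = grid[np.argmin(np.abs(f(grid, *p) - 600.0))]  # age at 600 g
    print(f"{name}: asymptote {p[0]:.0f} g, ~{age_600:.0f} days to reach 600 g")
```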
Abstract:
Lignocellulosic biomasses (e.g., wood and straws) are a potential renewable source for the production of a wide variety of chemicals that could replace those currently produced by the petrochemical industry. This would lead to lower greenhouse gas emissions and waste amounts, and to economic savings. Many pathways are available for manufacturing chemicals from lignocellulosic biomasses. One option is to hydrolyze the cellulose and hemicelluloses of these biomasses into monosaccharides using concentrated sulfuric acid as catalyst. This process is an efficient method for producing monosaccharides, which are valuable platform chemicals; other valuable products are also formed in the hydrolysis. Unfortunately, concentrated acid hydrolysis has been deemed unfeasible, mainly due to the high chemical consumption resulting from the need to remove sulfuric acid from the obtained hydrolysates prior to the downstream processing of the monosaccharides. Traditionally, this has been done by neutralization with lime, which consumes large amounts of chemicals. In addition, the by-products formed in the hydrolysis are not removed and may thus hinder monosaccharide processing. To improve the feasibility of concentrated acid hydrolysis, the chemical consumption should be decreased by recycling sulfuric acid without neutralization. Furthermore, the monosaccharides and the other products formed in the hydrolysis should be recovered selectively for efficient downstream processing; the selective recovery of the hydrolysis by-products would bring additional economic benefits to the process due to their high value. In this work, the use of chromatographic fractionation for recycling sulfuric acid and selectively recovering the main components of the hydrolysates formed in concentrated acid hydrolysis was investigated. Chromatographic fractionation based on electrolyte exclusion, with gel-type strong acid cation exchange resins in acid (H+) form as the stationary phase, was studied. A systematic experimental and model-based study of the separation task at hand was conducted: the phenomena affecting the separation were determined and their effects elucidated, and mathematical models that accurately account for these phenomena were derived and used in the simulation of the fractionation process. The main components of the concentrated acid hydrolysates (sulfuric acid, monosaccharides, and acetic acid) were included in this model. The performance of the fractionation process was investigated experimentally and by simulations, and the use of different process options was also studied. Sulfuric acid was found to have a significant co-operative effect on the sorption of the other components. This brings about interesting and beneficial effects in the column operations; it is especially beneficial for the separation of sulfuric acid and the monosaccharides. Two different approaches to modelling the sorption equilibria were investigated in this work: a simple empirical approach and a thermodynamically consistent approach (the Adsorbed Solution theory). Accurate modelling of the phenomena observed in this work was found to be possible using the simple empirical models, whereas the use of the Adsorbed Solution theory is complicated by the nature of the theory and the complexity of the studied system.
In addition to the sorption models, a dynamic column model that accounts for the volume changes of the gel-type resins as changes in resin bed porosity was also derived. Using chromatography, all the main components of the hydrolysates can be recovered selectively, and the sulfuric acid consumption of the hydrolysis process can be lowered considerably. Investigation of the performance of the chromatographic fractionation showed that the highest separation efficiency in this task is obtained with a gel-type resin with a high crosslinking degree (8 wt%), especially when the hydrolysates contain large amounts of acetic acid. In addition, the concentrated acid hydrolysis should be carried out with as low a sulfuric acid concentration as possible to obtain good separation performance. The column loading and flow rate also have large effects on the performance. It was also demonstrated that when the fractions obtained in the chromatographic fractionation are recycled to preceding unit operations, those unit operations should be included in the performance evaluation of the fractionation; when this was done, the separation performance and the feasibility of the concentrated acid hydrolysis process were found to improve considerably. The use of multi-column chromatographic fractionation processes, the Japan Organo process and the Multi-Column Recycling Chromatography process, was also investigated. In the studied case, neither of these processes could compete with the single-column batch process in productivity. However, due to its internal recycling steps, Multi-Column Recycling Chromatography was found to be superior to the batch process when product yield and eluent consumption were taken into account.
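As a rough illustration of the empirical sorption approach described above, the sketch below runs a simple mixing-cell column model in which each component's distribution coefficient grows with the local sulfuric acid concentration (the co-operative effect); all parameter values are invented, and the model ignores the bed-porosity changes treated in the thesis.

```python
# Mixing-cell chromatographic column with acid-dependent linear isotherms.
import numpy as np

def simulate_column(n_cells=50, n_steps=4000, dt=0.5, pulse_steps=400):
    eps = 0.4                                  # bed porosity (held constant here)
    phase_ratio = (1.0 - eps) / eps
    # Components: 0 = H2SO4 (excluded, small K), 1 = monosaccharide, 2 = acetic acid.
    K0 = np.array([0.05, 0.35, 0.60])          # base distribution coefficients (assumed)
    k_coop = np.array([0.0, 0.4, 0.2])         # sorption increase per unit acid (assumed)
    feed = np.array([1.0, 0.5, 0.1])           # feed concentrations (arbitrary units)

    c = np.zeros((n_cells, 3))
    outlet = []
    for step in range(n_steps):
        inlet = feed if step < pulse_steps else np.zeros(3)
        K = K0 * (1.0 + k_coop * c[:, [0]])    # acid enhances sorption of the others
        retard = 1.0 + phase_ratio * K         # retardation factor per cell/component
        c_new = c.copy()
        c_new[0] += dt * (inlet - c[0]) / retard[0]
        c_new[1:] += dt * (c[:-1] - c[1:]) / retard[1:]
        c = c_new
        outlet.append(c[-1].copy())
    return np.array(outlet)                    # elution profiles at the column outlet

profiles = simulate_column()
print("peak exit times (steps):", profiles.argmax(axis=0))  # excluded acid elutes first
```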
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between nodes, making the dependencies between nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network, instead of in conventional programming languages, makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated; the result is a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state-space explosion. This kind of simplification is necessary to make the state-space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that can produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
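As a toy illustration of the idea, the sketch below treats scheduling as a search over the state space of queue fill levels and looks for a firing sequence that returns the network to its initial state; that cycle is a static schedule. The three-actor pipeline is an invented example, not an RVC-CAL program or the thesis's tooling.

```python
# Static-schedule search over bounded queue states, model-checking style.
from collections import deque

# Each actor: tokens consumed/produced per firing on the named edges.
actors = {
    "producer": {"consumes": {}, "produces": {"e0": 2}},
    "filter":   {"consumes": {"e0": 3}, "produces": {"e1": 1}},
    "consumer": {"consumes": {"e1": 2}, "produces": {}},
}
CAPACITY = 8                                    # bounded queues keep the space finite

def fire(state, actor):
    """Return the new queue state if `actor` can fire, else None."""
    a = actors[actor]
    if any(state[e] < n for e, n in a["consumes"].items()):
        return None
    if any(state[e] + n > CAPACITY for e, n in a["produces"].items()):
        return None
    new = dict(state)
    for e, n in a["consumes"].items():
        new[e] -= n
    for e, n in a["produces"].items():
        new[e] += n
    return new

# Breadth-first search for a non-empty firing sequence returning to the
# initial state (the reachability question a model checker would answer).
init = {"e0": 0, "e1": 0}
queue, seen = deque([(init, [])]), set()
while queue:
    state, schedule = queue.popleft()
    if schedule and state == init:
        print("static schedule:", schedule)
        break
    key = tuple(sorted(state.items()))
    if key in seen and schedule:
        continue
    seen.add(key)
    for name in actors:
        nxt = fire(state, name)
        if nxt is not None:
            queue.append((nxt, schedule + [name]))
```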
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as from other fields of science including linguistics and the behavioral sciences, are also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural-language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. This is, however, not a trivial task, and the link between the linguistic and computational levels of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics, and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented, and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representing the meaning of linguistic terms, computing with these representations, and retranslating the results back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence between the linguistic and mathematical levels of the models, and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language, and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background, context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice, and the necessity of obtaining outputs that are easy to interpret (and available in a very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly the integration of peer review into the evaluation of R&D outputs.
In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of outputs of multidimensional questionnaires is suggested. The issue of data quality in mathematical classification models is also studied here; a modification of the receiver operating characteristic (ROC) method is presented that reflects the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications in which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide closer insight into the practical applications considered in the second part of the thesis.
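For illustration, a minimal sketch of the linguistic-to-mathematical link discussed above: linguistic terms modelled as triangular fuzzy numbers, one Mamdani-style rule evaluation, and a retranslation step (linguistic approximation). The terms, the 0-100 scale, and the distance measure are invented examples, not the thesis's models.

```python
# Triangular fuzzy terms, rule firing, and linguistic approximation.
import numpy as np

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c], peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Shoulders approximated by peaks just outside the 0-100 evaluation scale:
terms = {"poor": (-1.0, 0.0, 40.0), "average": (20.0, 50.0, 80.0),
         "excellent": (60.0, 100.0, 101.0)}

def linguistic_approximation(x_grid, output_membership):
    """Retranslate a fuzzy output into the closest linguistic term
    (here: smallest mean absolute difference of membership functions)."""
    dist = {name: np.mean(np.abs(output_membership - triangular(x_grid, *p)))
            for name, p in terms.items()}
    return min(dist, key=dist.get)

# Mamdani-style evaluation of two rules for a crisp score of 68:
x = np.linspace(0.0, 100.0, 1001)
score = 68.0
fired = np.maximum(
    np.minimum(triangular(score, *terms["average"]), triangular(x, *terms["average"])),
    np.minimum(triangular(score, *terms["excellent"]), triangular(x, *terms["excellent"])))
print("retranslated as:", linguistic_approximation(x, fired))
```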
Abstract:
Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, the presence of individuals with subpatent (normally asymptomatic) malaria infections, and behavioral plasticity in the mosquito hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control are developed that address the roles of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for the transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increased risk of superinfection due to iron supplementation and fortification in malaria-endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria-endemic regions. Second, a deterministic model of the transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results emphasize the importance of using a combination of all four control tools for effective malaria intervention. Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. For this model, four possible control measures are analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall of most epidemiological models is the absence of real data; model-based conclusions have to be drawn from uncertain parameter values. In this thesis, an approach to studying the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters. Finally, a separate work modeling within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells.
The model hypothesizes that in severe malaria, owing to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. An analysis of the model, including parameter identifiability using Markov chain Monte Carlo methods, is presented.
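For illustration, the sketch below integrates a minimal Ross-Macdonald-style host-vector model, the general shape underlying the transmission models described above; the two-compartment structure and all parameter values are illustrative only, not the thesis's models.

```python
# Minimal host-vector malaria transmission ODE integrated with scipy.
from scipy.integrate import solve_ivp

def malaria_rhs(t, y, a=0.3, b=0.5, c=0.5, r=0.01, g=0.1, m=2.0):
    """y = (x, z): infected fractions of humans and mosquitoes.
    a: bite rate, b/c: human/mosquito infection probabilities per bite,
    r: human recovery rate, g: mosquito death rate, m: mosquitoes per human."""
    x, z = y
    dx = m * a * b * z * (1.0 - x) - r * x
    dz = a * c * x * (1.0 - z) - g * z
    return [dx, dz]

sol = solve_ivp(malaria_rhs, (0.0, 365.0), [0.01, 0.0])
x_end, z_end = sol.y[:, -1]
print(f"after one year: {100 * x_end:.1f}% of humans, {100 * z_end:.1f}% of mosquitoes infected")

# Threshold quantity for this toy model (disease persists if R0 > 1):
a, b, c, r, g, m = 0.3, 0.5, 0.5, 0.01, 0.1, 2.0
print("R0 =", m * a**2 * b * c / (r * g))
```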
Abstract:
The partial replacement of NaCl by KCl is a promising alternative for producing cheese with lower sodium content, since KCl does not change the final quality of the cheese. To assure proper salt proportions, mathematical models are employed to control the production process and to simulate the multicomponent diffusion during the ripening period of reduced-salt cheese. The generalized Fick's second law is widely accepted as the primary mass transfer model within solid foods. The finite element method (FEM) was used to solve the resulting system of differential equations. NaCl and KCl multicomponent diffusion was thus simulated during the salting and ripening of Prato cheese (a Brazilian semi-hard cheese) in a 20% (w/w) static brine containing 70% NaCl and 30% KCl. The theoretical results were compared with experimental data and showed deviations of 4.43% for NaCl and 4.72% for KCl, validating the proposed model for the production of good-quality, reduced-sodium cheeses.
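For illustration, a sketch of the diffusion problem being solved, using explicit finite differences as a simpler stand-in for the study's finite element solution; the diffusivities, slab geometry, and brine composition are assumed values, not the study's.

```python
# 1-D two-component Fickian diffusion of NaCl and KCl into a cheese slab.
import numpy as np

L, nx = 0.02, 101                      # half-thickness (m), grid points
dx = L / (nx - 1)
D = {"NaCl": 2.0e-10, "KCl": 1.8e-10}  # effective diffusivities (m^2/s, assumed)
brine = {"NaCl": 0.14, "KCl": 0.06}    # boundary concentrations (kg/kg, assumed)
dt = 0.2 * dx**2 / max(D.values())     # explicit stability: dt <= dx^2 / (2 D)

c = {s: np.zeros(nx) for s in D}       # initially unsalted cheese
steps = int(12 * 3600.0 / dt)          # 12 h of salting
for _ in range(steps):
    for s in D:
        ci = c[s]
        ci[0] = brine[s]                                   # brine-contact surface
        ci[1:-1] += D[s] * dt / dx**2 * (ci[2:] - 2 * ci[1:-1] + ci[:-2])
        ci[-1] = ci[-2]                                    # symmetry at slab centre
print({s: round(float(c[s].mean()), 4) for s in D})        # mean salt content
```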
Abstract:
Potato pulp waste (PPW) drying was investigated under different experimental conditions (temperatures from 50 to 70 °C and air flows from 0.06 to 0.092 m³ m⁻² s⁻¹) as a possible way to recover the waste generated by potato chip industries, and the model best fitting the experimental PPW drying results was selected. As the criterion to evaluate the fit of the mathematical models, a method based on the sum of the scores assigned to four statistical parameters was used: the coefficient of determination (R²), mean relative error P (%), root mean square error (RMSE), and reduced chi-square (χ²). The results revealed that temperature and air velocity are important parameters for reducing PPW drying time. The Midilli and Diffusion models had the lowest sums, i.e., the best fit to the drying data, satisfactorily representing the drying kinetics of PPW.
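For illustration, a sketch of the model-selection procedure: fitting the Midilli thin-layer drying model MR = a·exp(-k·tⁿ) + b·t and scoring it with the four statistics used in the study; the drying data points and parameter bounds are synthetic examples, not the study's data.

```python
# Fit the Midilli drying model and compute R2, P(%), RMSE, and reduced chi-square.
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

t = np.array([0, 30, 60, 90, 120, 180, 240, 300], dtype=float)   # drying time, min
mr = np.array([1.0, 0.72, 0.52, 0.38, 0.28, 0.15, 0.08, 0.05])   # moisture ratio

p, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.01, 1.0, 0.0),
                 bounds=([0.5, 1e-4, 0.1, -0.01], [1.5, 1.0, 3.0, 0.01]))
pred = midilli(t, *p)
resid = mr - pred
r2 = 1.0 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)        # coeff. of determination
P = 100.0 / len(mr) * np.sum(np.abs(resid) / mr)                 # mean relative error, %
rmse = np.sqrt(np.mean(resid**2))                                # root mean square error
chi2 = np.sum(resid**2) / (len(mr) - len(p))                     # reduced chi-square
print(f"R2={r2:.4f}  P={P:.2f}%  RMSE={rmse:.4f}  chi2={chi2:.2e}")
```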