979 results for C65 - Miscellaneous Mathematical Tools
Abstract:
The work reported in this paper is motivated by the need to develop swarm pattern transformation methodologies. Two methods, namely a macroscopic method and a mathematical method, are investigated for pattern transformation. The first method is based on macroscopic parameters while the second is based on both microscopic and macroscopic parameters. A formal definition of pattern transformation, considering four special cases of transformation, is presented. Simulations on a physics simulation engine are used to confirm the feasibility of the proposed transformation methods. A brief comparison between the two methods is also presented.
Abstract:
Equilibrium theory occupies an important position in chemistry and is traditionally based on thermodynamics. A novel mathematical approach to chemical equilibrium theory for gaseous systems at constant temperature and pressure is developed. Six theorems that illustrate the power of mathematics to explain chemical observations are presented and combined logically to create a coherent system. This mathematical treatment provides more insight into chemical equilibrium and creates more tools that can be used to investigate complex situations. Although some of the issues covered have previously been addressed in the literature, new mathematical representations are provided. Compared to traditional treatments, the new approach relies on straightforward mathematics and less on thermodynamics, thus giving a new and complementary perspective on equilibrium theory. It provides a new theoretical basis for a thorough and deep presentation of traditional chemical equilibrium. This work demonstrates that new research in a traditional field such as equilibrium theory, generally thought to have been completed many years ago, can still offer new insights, and that more efficient ways to present the content can be established. The work presented here can be considered appropriate as part of a mathematical chemistry course at the university level.
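The mass-action relation underlying such treatments can be sketched directly. The snippet below is an illustrative example, not taken from the abstracted work: the ammonia-synthesis partial pressures and the value of K are hypothetical, chosen only to show how a reaction quotient Q compared against an equilibrium constant K predicts the net direction of a gaseous reaction at fixed temperature and pressure.

```python
def reaction_quotient(pressures, coeffs):
    """Mass-action quotient Q = prod(p_i ** nu_i); nu > 0 for products,
    nu < 0 for reactants (partial pressures in bar)."""
    q = 1.0
    for species, nu in coeffs.items():
        q *= pressures[species] ** nu
    return q

def direction(q, k):
    """Predict the net reaction direction by comparing Q with K."""
    if q < k:
        return "forward"
    if q > k:
        return "reverse"
    return "equilibrium"

# N2 + 3 H2 <=> 2 NH3; coefficients and pressures are illustrative values
coeffs = {"NH3": 2, "N2": -1, "H2": -3}
p = {"NH3": 0.5, "N2": 1.0, "H2": 1.0}
q = reaction_quotient(p, coeffs)   # 0.5**2 / (1.0 * 1.0**3) = 0.25
print(direction(q, k=0.5))         # prints "forward" since Q < K
```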
Abstract:
Neural networks and wavelet transforms have recently been seen as attractive tools for developing efficient solutions to many real-world function approximation problems. Function approximation is a very important task in environments where computation has to be based on extracting information from data samples of real-world processes. Mathematical modelling is therefore a very important tool for supporting the development of the neural network field. In this article we introduce a series of mathematical demonstrations that guarantee the wavelet properties of the PPS functions. As an application, we show the use of PPS-wavelets in handwritten digit recognition problems through function approximation techniques.
Abstract:
Logic courses represent a pedagogical challenge, and the recorded rates of failure and dropout in them are often high. Among other difficulties, students face a cognitive overload in trying to understand logical concepts in a meaningful way. Along those lines, computational learning tools are resources that help both to alleviate cognitive overload and to allow practical experimentation with theoretical concepts. The present study proposes an interactive tutorial, TryLogic, aimed at teaching how to solve logical conjectures either by proofs or by refutations. The tool was developed from the architecture of the TryOcaml tool, using the communication support of the ProofWeb web interface to access the Coq proof assistant. The goals of TryLogic are: (1) presenting a set of lessons for applying heuristic strategies to solve problems in Propositional Logic; (2) organizing the exposition of concepts related to Natural Deduction and Propositional Semantics in sequential steps; (3) providing interactive tasks to the students. The present study also aims at: presenting our implementation of a formal system for refutation; describing the integration of our infrastructure with the Moodle Virtual Learning Environment through the IMS Learning Tools Interoperability specification; presenting the Conjecture Generator that supports the proving and refuting tasks; and, finally, evaluating the learning experience of Logic students through the application of the conjecture-solving task associated with the use of TryLogic.
Abstract:
The conventional power flow method is considered inadequate for obtaining the maximum loading point because of the singularity of the Jacobian matrix. Continuation methods are efficient tools for solving this kind of problem, since different parameterization schemes can be used to avoid such ill-conditioning. This paper presents the details of new schemes for the parameterization step of the continuation power flow method. The new parameterization options are based on physical parameters, namely, the total power losses (real and reactive), the power at the slack bus (real or reactive), the reactive power at generation buses, and transmission line power losses (real and reactive). The simulation results obtained with the new approach for the IEEE test systems (14, 30, 57, and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are not only preserved but also improved.
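The maximum loading point and the disappearance of a power-flow solution can be illustrated on a two-bus system, where the load voltage satisfies a quadratic in V². The sketch below is a minimal illustration of the nose-curve idea, not the paper's parameterization schemes; the values of E, X and the loading sweep are assumptions for demonstration.

```python
import math

def bus_voltage(P, Q, E=1.0, X=0.1):
    """High-voltage root of the two-bus power-flow equation
    V**4 + V**2 * (2*Q*X - E**2) + X**2 * (P**2 + Q**2) = 0.
    Returns None when no real solution exists, i.e. the loading lies
    beyond the nose point where the Jacobian becomes singular."""
    b = 2.0 * Q * X - E**2
    disc = b * b - 4.0 * X**2 * (P**2 + Q**2)
    if disc < 0.0:
        return None
    u = (-b + math.sqrt(disc)) / 2.0   # u = V**2, upper (stable) branch
    return math.sqrt(u)

# Sweep a loading parameter lam (P = lam, Q = 0.2 * lam) to trace the nose curve
for lam in [0.0, 2.0, 4.0, 6.0]:
    print(lam, bus_voltage(P=lam, Q=0.2 * lam))
```

At light loading the voltage is close to E; as lam grows the discriminant shrinks and eventually turns negative, which is exactly the maximum loading point that continuation methods are designed to traverse.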
Abstract:
The IEEE 1451 standard is intended to address the problem of interfacing smart transducers in network environments. Proprietary hardware and software are usually a very efficient way to implement the IEEE 1451 normative, although they can be expensive and inflexible. In contrast, the use of open and standardized tools for implementing the IEEE 1451 normative is proposed in this paper. Tools such as the Java and Python programming languages, Linux, programmable logic technology, personal computer resources, and Ethernet architecture were integrated in order to construct a network node based on the IEEE 1451 standards. The node can be applied in systems based on the client-server communication model. The evaluation of the employed tools and experimental results are presented. © 2005 IEEE.
Abstract:
This paper applies a routine for highway detection in high-resolution images using mathematical morphology tools. Mathematical Morphology theory consists of quantitatively describing the geometric structures present in an image (targets or features), which explains its use in this work. Since high-resolution images are used, the greatest difficulty in the highway detection process is the presence of trees and automobiles at the borders of the tracks. Thus, to obtain good results with the morphological tools, it was necessary to choose appropriately the structuring elements used in the operations. Through the appropriate choice of morphological operators and structuring elements, it was possible to detect the highway tracks. Linear feature detection using mathematical morphology techniques can contribute to cartographic applications, such as the updating of cartographic products.
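The way an opening removes small artifacts (such as cars and trees) while preserving elongated road segments can be sketched with set-based binary morphology. This is a minimal illustration on a toy binary image; the paper's actual routine, operators and structuring elements are not reproduced here.

```python
def dilate(img, se):
    """Binary dilation of a pixel set img (set of (row, col)) by a
    structuring element se given as a set of offsets."""
    return {(r + dr, c + dc) for (r, c) in img for (dr, dc) in se}

def erode(img, se):
    """Binary erosion: keep pixels where every translated SE offset hits img."""
    return {(r, c) for (r, c) in img
            if all((r + dr, c + dc) in img for (dr, dc) in se)}

def opening(img, se):
    """Erosion followed by dilation: removes features smaller than the SE
    while preserving elongated structures such as road tracks."""
    return dilate(erode(img, se), se)

# Toy binary image: a 20-px horizontal "road" plus a 1-px "car" artifact
road = {(5, c) for c in range(20)}
noise = {(2, 3)}
se = {(0, -1), (0, 0), (0, 1)}    # horizontal 1x3 structuring element

clean = opening(road | noise, se)
print((2, 3) in clean)    # False: the isolated pixel is removed
print((5, 10) in clean)   # True: the road survives the opening
```

Choosing a structuring element aligned with the expected road direction is what lets the opening discriminate elongated tracks from compact clutter, which mirrors the paper's point about choosing the structuring element appropriately.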
Abstract:
In the last years of research, I focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools useful for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology as well as pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of auto-regulation of the brain. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. Included in the normal physiological modelling were: intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model performed in a clinically realistic manner given inputs from published data on traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal outflow).
Based on the results, we believe the model would be useful to teach complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres). II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, distal collateral circulation plays a major role during unilateral occlusion of the middle cerebral artery.
In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics and can not only explain generalized results in terms of the physiological mechanisms involved but also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions. III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (same model as in the first paragraph). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described assuming a dramatic increase in sympathetic activity to vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase).
Then, patients with severe head injury were simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value to assist clinicians in finding the balance between the clinical benefits of the Cushing response and its shortcomings. IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and the respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data in the case of normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are not only capable of summarizing current knowledge but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments.
In the latter case they generate experiments to be performed to gather the missing data.
Abstract:
This Thesis aims at building and discussing applications of mathematical models to Energy problems, on both the thermal and the electrical side. The objective is to show how mathematical programming techniques developed within Operational Research can give useful answers in the Energy Sector, how they can provide tools to support the decision-making processes of companies operating in energy production and distribution, and how they can be successfully used to run simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology by comparing it with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review on mathematical modelling in the Energy Sector. The second part presents mathematical models for District Heating strategic network design and incremental network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs, and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, with consideration of battery degradation. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As the available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to investigate the combination of battery costs and performance at which such systems become economic.
We also investigate how the system design should change when battery degradation is taken into account.
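The role of a throughput-based degradation cost term can be illustrated with a deliberately simplified hour-by-hour dispatch rule rather than the thesis's linear programming models. All numbers here (capacity, degradation cost per kWh of throughput, backup price, solar and demand profiles) are assumptions for demonstration only.

```python
def dispatch(solar, demand, cap=10.0, deg_cost=0.05, backup_price=0.30):
    """Greedy hour-by-hour dispatch for an off-grid solar-plus-battery scheme.
    deg_cost charges every kWh of battery throughput (charge or discharge)
    with an assumed degradation cost; deficits not covered by the battery
    are met by backup generation at backup_price. All values illustrative."""
    soc, cost = 0.0, 0.0
    for s, d in zip(solar, demand):
        net = s - d
        if net >= 0.0:                      # surplus: charge (excess curtailed)
            charge = min(net, cap - soc)
            soc += charge
            cost += deg_cost * charge
        else:                               # deficit: discharge, then backup
            discharge = min(-net, soc)
            soc -= discharge
            cost += deg_cost * discharge
            cost += backup_price * (-net - discharge)
    return soc, round(cost, 4)

soc, cost = dispatch(solar=[5, 6, 4, 0, 0], demand=[2, 2, 2, 3, 3])
print(soc, cost)   # 3.0 0.75
```

Raising deg_cost shifts the optimal pattern toward shallower cycling, which is the sensitivity the thesis investigates; an actual LP would co-optimize all hours instead of deciding greedily.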
Abstract:
In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community. In particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously exposed formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose query constraints; fully customizable and highly structured query results having a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
Abstract:
In this project we developed conductive thermoplastic resins by adding varying amounts of three different carbon fillers: carbon black (CB), synthetic graphite (SG) and multi-walled carbon nanotubes (CNT) to a polypropylene matrix for application as fuel cell bipolar plates. This component of fuel cells provides mechanical support to the stack, circulates the gases that participate in the electrochemical reaction within the fuel cell and allows for removal of the excess heat from the system. The materials fabricated in this work were tested to determine their mechanical and thermal properties. These materials were produced by adding varying amounts of single carbon fillers to a polypropylene matrix (2.5 to 15 wt.% Ketjenblack EC-600 JD carbon black, 10 to 80 wt.% Asbury Carbon's Thermocarb TC-300 synthetic graphite, and 2.5 to 15 wt.% of Hyperion Catalysis International's FIBRIL™ multi-walled carbon nanotubes). In addition, composite materials containing combinations of these three fillers were produced. The thermal conductivity results showed an increase in both through-plane and in-plane thermal conductivities, with the largest increase observed for synthetic graphite. The Department of Energy (DOE) had previously set a thermal conductivity goal of 20 W/m·K, which was surpassed by formulations containing 75 wt.% and 80 wt.% SG, yielding in-plane thermal conductivity values of 24.4 W/m·K and 33.6 W/m·K, respectively. In addition, composites containing 2.5 wt.% CB, 65 wt.% SG, and 6 wt.% CNT in PP had an in-plane thermal conductivity of 37 W/m·K. Flexural and tensile tests were conducted. All composite formulations exceeded the flexural strength target of 25 MPa set by DOE. The tensile and flexural modulus of the composites increased with higher concentrations of carbon fillers. Carbon black and synthetic graphite caused a decrease in the tensile and flexural strengths of the composites.
However, carbon nanotubes increased the composite tensile and flexural strengths. Mathematical models were applied to estimate the through-plane and in-plane thermal conductivities of single- and multiple-filler formulations, and the tensile modulus of single-filler formulations. For thermal conductivity, Nielsen's model yielded accurate values when compared to experimental results obtained through the Flash method. For prediction of tensile modulus, Nielsen's model yielded the smallest error between the predicted and experimental values. The second part of this project consisted of the development of a curriculum in Fuel Cell and Hydrogen Technologies to address different educational barriers identified by the Department of Energy. Through the creation of new courses and enterprise programs in the areas of fuel cells and the use of hydrogen as an energy carrier, we introduced engineering students to the new technologies, policies and challenges present with this alternative energy. Feedback provided by students participating in these courses and enterprise programs indicates positive acceptance of the different educational tools. Results obtained from a survey applied to students after participating in these courses showed an increase in knowledge and awareness of energy fundamentals, which indicates that the modules developed in this project are effective in introducing students to alternative energy sources.
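The Nielsen (Lewis-Nielsen) model referred to above predicts composite conductivity from the matrix and filler conductivities, the filler volume fraction, a shape factor A, and a maximum packing fraction. A minimal sketch follows; the default A and phi_m values are illustrative textbook-style choices, not the parameters fitted in this work.

```python
def nielsen_conductivity(k_m, k_f, phi, A=1.5, phi_m=0.637):
    """Lewis-Nielsen estimate of composite thermal conductivity (W/m-K).
    k_m, k_f: matrix and filler conductivities; phi: filler volume fraction.
    A (shape factor) and phi_m (maximum packing fraction) are illustrative
    defaults, not values fitted to the composites in this study."""
    ratio = k_f / k_m
    B = (ratio - 1.0) / (ratio + A)
    psi = 1.0 + phi * (1.0 - phi_m) / phi_m**2
    return k_m * (1.0 + A * B * phi) / (1.0 - B * psi * phi)

# Conductivity rises with filler loading (all numbers illustrative)
for phi in [0.0, 0.1, 0.3]:
    print(phi, round(nielsen_conductivity(k_m=0.24, k_f=100.0, phi=phi), 3))
```

At phi = 0 the expression collapses to the matrix conductivity, and it grows monotonically with loading below phi_m, which is the qualitative behavior reported for the SG-filled formulations.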
Abstract:
Eutrophication is a persistent problem in many freshwater lakes. Delay in lake recovery following reductions in external loading of phosphorus, the limiting nutrient in freshwater ecosystems, is often observed. Models have been created to assist with lake remediation efforts; however, the application of management tools to sediment diagenesis is often neglected due to conceptual and mathematical complexity. SED2K (Chapra et al. 2012) is proposed as a "middle way", offering engineering rigor while remaining accessible to users. An objective of this research is to further support the development and application of SED2K for sediment phosphorus diagenesis and release to the water column of Onondaga Lake. Application of SED2K has been made to eutrophic Lake Alice in Minnesota. The more homogeneous sediment characteristics of Lake Alice, compared with the industrially polluted sediment layers of Onondaga Lake, allowed an invariant rate coefficient to be applied to describe first-order decay kinetics of phosphorus. When a similar approach was attempted on Onondaga Lake, an invariant rate coefficient failed to simulate the sediment phosphorus profile. Therefore, labile P was accounted for by progressive preservation after burial, and a rate coefficient that gradually decreased with depth was applied. In this study, profile sediment samples were chemically extracted into five operationally defined fractions: CaCO3-P, Fe/Al-P, Biogenic-P, Ca Mineral-P and Residual-P. Chemical fractionation data from this study showed that preservation is not the only mechanism by which phosphorus may be maintained in a non-reactive state in the profile. Sorption has been shown to contribute substantially to P burial within the profile. A new kinetic approach involving partitioning of P into process-based fractions is applied here.
Results from this approach indicate that labile P (Ca Mineral-P and Organic P) is contributing to internal P loading to Onondaga Lake through diagenesis and diffusion to the water column, while the sorbed P fraction (Fe/Al-P and CaCO3-P) remains constant. Sediment profile concentrations of labile and total phosphorus at the time of deposition were also modeled and compared with current labile and total phosphorus, to quantify the extent to which the remaining phosphorus will continue to contribute to internal P loading and influence the trophic status of Onondaga Lake. Results presented here also allowed for estimation of the depth of the active sediment layer and the attendant response time, as well as the sediment burden of labile P and the associated efflux.
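A rate coefficient that decreases with depth, of the kind described above, can be sketched as first-order decay with k(z) falling off exponentially with burial depth. This is an illustrative toy profile, not the SED2K implementation; p0, k0 and alpha are assumed values.

```python
import math

def labile_p_profile(p0, depths, k0=0.1, alpha=0.05):
    """Remaining labile P versus depth for first-order decay whose rate
    coefficient falls off with depth: k(z) = k0 * exp(-alpha * z).
    The remaining fraction at depth z is exp(-Integral_0^z k(s) ds)
    = exp(-k0 * (1 - exp(-alpha * z)) / alpha).
    p0, k0 and alpha are illustrative values, not fitted SED2K parameters."""
    profile = []
    for z in depths:
        k_int = k0 * (1.0 - math.exp(-alpha * z)) / alpha
        profile.append(p0 * math.exp(-k_int))
    return profile

depths = [0.0, 5.0, 10.0, 20.0]          # sediment depth, cm (illustrative)
profile = labile_p_profile(1.0, depths)
print([round(v, 3) for v in profile])    # decreasing, but bounded away from 0
```

Because the integrated rate saturates at k0/alpha, a fraction of labile P is effectively preserved at depth, in contrast to an invariant rate coefficient, which drives the profile toward zero.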
Abstract:
Compliance with on-time delivery under high cost pressure can be achieved through optimization of the in-house tool supply. Within Transfer Project 13 of the Collaborative Research Centre 489, using the forging industry as an example, a mathematical model was developed which determines the minimum inventory of forging tools required for production, taking the tool appropriation delay into account.
Abstract:
We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at pixel resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. We applied the new methods successfully to tree rings, to well-dated and previously manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that laminations at Weddell Sea sites represent varves, deposited continuously over several millennia during the Last Glacial Maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
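The zero-crossing idea can be sketched as follows: compare the gray-scale curve with a wide moving average and count sign changes, two per light/dark couplet. This is a minimal reimplementation of the concept on a synthetic curve, not the PEAK tool's code; the window width and the test signal are assumptions.

```python
import math

def moving_average(x, w):
    """Centered moving average with window w (truncated at the edges)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def count_couplets(gray, w):
    """Count light/dark couplets: each pair of sign changes of the gray-scale
    curve about its moving average corresponds to one couplet ('year')."""
    avg = moving_average(gray, w)
    signs = [1 if g > a else -1 for g, a in zip(gray, avg)]
    crossings = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return crossings // 2

# Synthetic gray-scale curve: 4 sinusoidal couplets over 40 samples
gray = [50.0 + 40.0 * math.sin(2.0 * math.pi * i / 10.0) for i in range(40)]
print(count_couplets(gray, w=5))
```

Because every couplet produces one bright and one dark interval relative to the moving average, integer division of the crossing count by two recovers the number of couplets; the moving-average criterion is what makes the count reproducible rather than observer-dependent.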
Abstract:
There is evidence that the climate is changing and that the change is now influenced and accelerated by the increase of CO2 in the atmosphere due to human combustion. Such "climate change" is on the policy agenda at the global level, with the aim of understanding and reducing its causes and mitigating its consequences. In most countries and international organizations (UN, e.g. Rio de Janeiro 1992; OECD; EC; etc.), efforts and debates have been directed at identifying the possible causes, predicting the future evolution of some conditioning variables, and carrying out studies to fight the effects or to delay their negative evolution. The Kyoto Protocol of 1997 established international efforts regarding CO2 emissions, but it was partial and not followed by, for example, the USA and China, and at Durban in 2011 the ineffectiveness of humanity in facing such real global challenges became evident. Amid all this, the elaboration of a global model that can help to choose the best among the feasible alternatives, to elaborate strategies and to evaluate costs has not been undertaken, and the authors propose to enter that frame of study. As in all natural, technological and social changes, the best-prepared countries will bear the change best and recover most rapidly. The alternative will not be the same in all geographic areas, but the model must help us make the appropriate decision. It is essential to know which areas are more sensitive to the negative effects of climate change, the parameters to take into account for its evaluation, and the comprehensive plans to deal with it. The objective of this paper is to elaborate a mathematical decision-support model that will allow developing and evaluating alternatives of adaptation to climate change for different communities in Europe and Latin America, mainly in areas especially vulnerable to climate change, considering all the intervening factors.
The models will consider criteria of a physical type (meteorological, edaphic, water resources), of land use (agricultural, forestry, mining, industrial, urban, tourism, livestock), economic (income, costs, benefits, infrastructures), social (population), political (implementation, legislation), educational (educational programs, dissemination) and environmental, both at present and in the future. The intention is to obtain tools that help in reaching a realistic position on these challenges, which are an important part of humanity's problems in the coming decades.