824 results for C65 - Miscellaneous Mathematical Tools


Relevance:

20.00%

Publisher:

Abstract:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, it is difficult for even a skilled operator to attain optimum values every time. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes.

In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turn Master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yielded the optimum machining parameters from the experimentation.

Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions in the relevant search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated against published results from the literature. Several optimization methods, namely Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA), were applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness and accuracy, and it was observed how they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB, and for each evolutionary method optimum cutting conditions are provided to achieve better surface finish.

The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. PSO is a relatively recent heuristic search method whose mechanics are inspired by the swarming, collaborative behaviour of biological populations; the results show that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA, the proposed IGA provides better results than the conventional GA: the improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial-population scheme to provide a faster search mechanism.

Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420, arriving at the optimum machining parameters (feed, cutting speed, depth of cut and tool nose radius) for minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating evolutionary procedures seen in nature, which optimizes its own systems.
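As an illustration of the kind of evolutionary search described in this abstract, here is a minimal PSO sketch in Python (the thesis used MATLAB; this port is purely illustrative). The surface-roughness objective is a hypothetical response-surface polynomial, not the model fitted in the thesis, and the parameter bounds and swarm settings are likewise assumed.

    import numpy as np

    # Hypothetical response-surface model for surface roughness Ra (um);
    # coefficients are illustrative, not the thesis model.
    def roughness(x):
        v, f, d = x[..., 0], x[..., 1], x[..., 2]  # speed (m/min), feed (mm/rev), depth (mm)
        return 2.0 - 0.008 * v + 18.0 * f + 0.4 * d + 25.0 * f**2 + 2e-5 * v**2

    lo = np.array([60.0, 0.05, 0.5])    # assumed lower bounds on (v, f, d)
    hi = np.array([180.0, 0.30, 2.0])   # assumed upper bounds

    rng = np.random.default_rng(0)
    n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
    x = rng.uniform(lo, hi, size=(n, 3))        # particle positions
    vel = np.zeros_like(x)
    pbest, pbest_val = x.copy(), roughness(x)   # personal bests
    gbest = pbest[np.argmin(pbest_val)]         # global best

    for _ in range(iters):
        r1, r2 = rng.random((n, 3)), rng.random((n, 3))
        vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + vel, lo, hi)            # respect parameter bounds
        val = roughness(x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]

    print("optimum (v, f, d):", gbest, "predicted Ra:", roughness(gbest))

The same loop structure carries over to SA or a GA by replacing the velocity update with an acceptance rule or crossover/mutation operators.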

Relevance:

20.00%

Publisher:

Abstract:

This thesis attempts to gain insight into the mathematical approach to estuarine oceanographic systems and to understand closely the governing processes in the estuarine environment as well as in the adjoining river reaches. The main aim was to conduct pollution routing for a tropical estuarine environment, namely the Cochin estuary. In this context, attempts have been made to apply, verify and validate models, prepared with the modifications necessary to suit the area of interest, using the RIVMIX and WASP tools. Finally, the thesis concludes by highlighting the advantages and limitations of modelling water bodies, and it simulates most of the possible scenarios within the purview of this work.
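At its core, the pollutant-routing problem such tools address is the one-dimensional advection-dispersion equation; a minimal explicit finite-difference sketch is shown below. The velocity, dispersion coefficient, grid and release are assumed illustrative values, not calibrated Cochin-estuary inputs, and RIVMIX/WASP themselves are far more complete.

    import numpy as np

    # 1-D advection-dispersion: dC/dt + u dC/dx = D d2C/dx2
    # Assumed illustrative values; not calibrated to the Cochin estuary.
    u, D = 0.3, 5.0              # flow velocity (m/s), dispersion coeff. (m^2/s)
    dx, dt, nx, nt = 100.0, 60.0, 200, 500
    C = np.zeros(nx)
    C[20] = 100.0                # instantaneous pollutant release (arbitrary units)

    for _ in range(nt):
        adv = -u * (C[1:-1] - C[:-2]) / dx                # upwind advection
        dif = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2  # central dispersion
        C[1:-1] += dt * (adv + dif)

    print("peak concentration:", C.max(), "at x =", C.argmax() * dx, "m")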

Relevance:

20.00%

Publisher:

Abstract:

The present study is intended to provide a new scientific approach to the solution of the cost engineering problems encountered in the chemical industries of our nation. The problem is that of cost estimation of equipment, especially of pressure vessels, when setting up chemical industries. The present study attempts to develop a model for such cost estimation, which in turn, it is hoped, will go a long way towards solving this and related problems in forecasting the cost of setting up chemical plants.
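By way of context, a standard starting point in preliminary equipment cost estimation is the capacity-exponent ("six-tenths") rule; the sketch below applies it with invented numbers and is not the model developed in the study.

    # Capacity-exponent ("six-tenths") rule for preliminary equipment cost
    # estimation: C2 = C1 * (S2 / S1) ** n, with n ~ 0.6 for many vessels.
    # Reference cost, sizes and exponent below are illustrative only.
    def scaled_cost(c_ref, s_ref, s_new, n=0.6):
        return c_ref * (s_new / s_ref) ** n

    # A pressure vessel of 5 m^3 costing 20,000 (currency units), scaled to 12 m^3:
    print(round(scaled_cost(20_000, 5.0, 12.0)))   # about 33,800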

Relevance:

20.00%

Publisher:

Abstract:

Nonlinearity is a charming element of nature, and Nonlinear Science has become one of the most important tools for a fundamental understanding of nature. Solitons, solutions of a class of nonlinear partial differential equations that propagate without spreading and have particle-like properties, represent one of the most striking aspects of nonlinear phenomena. The study of wave propagation through nonlinear media has wide applications in different branches of physics. Different mathematical techniques have been introduced to study nonlinear systems. The thesis deals with some aspects of electromagnetic wave propagation through nonlinear media, viz. plasmas and ferromagnets, using the reductive perturbation method. The thesis contains six chapters.
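For context, the reductive perturbation method typically reduces weakly nonlinear, weakly dispersive systems (ion-acoustic waves in a plasma being the classic case) to the Korteweg-de Vries equation, whose one-soliton solution displays exactly the non-spreading, particle-like behaviour described above:

    % Canonical KdV equation obtained by reductive perturbation,
    % with its one-soliton solution:
    \[
      u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad
      u(x,t) = \frac{c}{2}\,\operatorname{sech}^{2}\!\left(
        \frac{\sqrt{c}}{2}\,\bigl(x - c\,t - x_{0}\bigr)\right).
    \]
    % Amplitude (c/2) and width (2/sqrt(c)) are locked together, so taller
    % solitons are narrower and faster: the particle-like behaviour noted above.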

Relevance:

20.00%

Publisher:

Abstract:

The reliability of an equipment or device is often taken to mean the probability that it carries out the functions expected of it adequately, without failure and within specified performance limits, at a given age and for a desired mission time, when used under the designated application and operating environmental stress. The approaches employed in reliability studies can broadly be classified as probabilistic and deterministic: the main interest in the former is to devise tools and methods to identify the random mechanism governing the failure process within a proper statistical framework, while the latter addresses the question of finding the causes of failure and the steps to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question the reliability analyst has to settle is the form of the life distribution. It is for no other reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on exhibiting the failure patterns that induce such models. The methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems; it ranges over a wide variety of scientific investigations in which "lifetime" may not refer to the length of life in the literal sense but can be conceived, in its most general form, as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, the biomedical sciences, economics and extreme value theory.
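As a concrete instance of a life distribution, the Weibull family is a common first choice because its shape parameter alone switches the hazard between decreasing, constant and increasing failure rates. The sketch below uses illustrative parameter values, not values fitted to any data set.

    import numpy as np

    # Weibull life distribution: R(t) = exp(-(t/eta)**beta),
    # hazard h(t) = (beta/eta) * (t/eta)**(beta - 1).
    # beta < 1: infant mortality; beta = 1: constant rate; beta > 1: wear-out.
    beta, eta = 1.8, 1000.0     # shape, scale (hours); illustrative values

    def reliability(t):
        return np.exp(-(t / eta) ** beta)

    def hazard(t):
        return (beta / eta) * (t / eta) ** (beta - 1)

    t = 500.0
    print(f"R({t}) = {reliability(t):.3f}, h({t}) = {hazard(t):.2e} per hour")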

Relevance:

20.00%

Publisher:

Abstract:

ICT Tools for Teaching & Learning. G Santhosh Kumar, Cochin University. Agenda: What is ICT? Why integrate ICT in education? What are the challenges? What are the good resources? Opening questions: "Despite the increasing use of ICT, the need for teachers is as great as ever"; "Placing ICT in schools will automatically improve the quality of education that children receive"; "The Internet is unsafe for children to use because there is so much dangerous material available on it". What is ICT? ICT is short for ..

Relevance:

20.00%

Publisher:

Abstract:

The focus of this paper is to develop computationally efficient mathematical morphology operators on hypergraphs. To this end we consider lattice structures on hypergraphs, on which we build morphological operators. We develop a pair of dual adjunctions between the vertex set and the hyperedge set of a hypergraph by defining a vertex-hyperedge correspondence. This allows us to recover the classical notion of the dilation/erosion of a subset of vertices and to extend it to subhypergraphs. The paper also studies the concept of a morphological adjunction on hypergraphs for which both the input and the output are hypergraphs.
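A minimal sketch of the vertex-hyperedge correspondence underlying such operators is given below, using the usual incidence-based definitions (a dilation-like map from a vertex set to the hyperedges it meets, an erosion-like map to the hyperedges it contains). The hypergraph itself is invented, and the paper's actual adjunctions are defined more generally.

    # Hypergraph as hyperedge -> vertex set; an invented example.
    H = {"e1": {1, 2}, "e2": {2, 3, 4}, "e3": {4, 5}}

    def dilate_vertices_to_edges(X):
        """Hyperedges meeting the vertex set X (a dilation-like operator)."""
        return {e for e, verts in H.items() if verts & X}

    def erode_vertices_to_edges(X):
        """Hyperedges entirely contained in X (an erosion-like operator)."""
        return {e for e, verts in H.items() if verts <= X}

    X = {2, 3, 4}
    print(dilate_vertices_to_edges(X))  # {'e1', 'e2', 'e3'}
    print(erode_vertices_to_edges(X))   # {'e2'}

The pair forms the kind of adjunction between the vertex lattice and the hyperedge lattice from which dilations and erosions on subhypergraphs can be composed.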

Relevance:

20.00%

Publisher:

Abstract:

The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data demands automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and it is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables are classified further into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables, such as novae and supernovae, occur rarely and are not periodic phenomena; most other variations are periodic in nature.

Variable stars can be observed through photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as its light curve. If the time series is folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, so one way to identify the type of a variable star and classify it is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into stages: observation, data reduction, data analysis, modeling and classification. Modeling helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of the basic parameters period, amplitude and phase, as well as some derived parameters. Of these, the period is the most important, since a wrong period leads to sparse light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps: ground-based observations are limited by daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis.

Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume a statistical model such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) of Zechmeister (2009) and the Significant Spectrum (SigSpec) method of Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can arise for several reasons: power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data; aliasing due to the influence of regular sampling; spurious periods due to long gaps; and power flow to harmonic frequencies, an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".

It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories behind four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
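To make the period-search step concrete, here is a minimal example using the Lomb-Scargle periodogram from astropy, a widely used implementation of the GLSP mentioned above; the unevenly sampled "light curve" is synthetic, with a known period planted for checking.

    import numpy as np
    from astropy.timeseries import LombScargle

    # Synthetic, unevenly sampled light curve: period 0.73 d plus noise.
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 30.0, 300))   # observation times (days)
    mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.73) + rng.normal(0, 0.05, t.size)

    # Generalised Lomb-Scargle periodogram; take the highest peak.
    freq, power = LombScargle(t, mag).autopower(minimum_frequency=0.05,
                                                maximum_frequency=10.0)
    best_period = 1.0 / freq[np.argmax(power)]
    phase = (t / best_period) % 1.0            # fold to get the phased light curve
    print(f"recovered period: {best_period:.4f} d (true 0.73 d)")

Aliases and harmonics of the true frequency appear as secondary peaks in `power`, which is exactly the failure mode the abstract describes for automated pipelines.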

Relevance:

20.00%

Publisher:

Abstract:

In machining practice, large differences in tool life repeatedly occur between identical tools under ostensibly identical machining conditions. In particular, in production steps that require drilling as a preliminary operation, atypical wear phenomena occasionally occur, which are attributed to the formation of a rehardened zone at the workpiece surface during drilling. Fundamentally, surface-zone alterations are a consequence of the mechanical and thermal loads during machining. As the cutting wedge penetrates, grain distortions occur that can increase the workpiece hardness to a depth of 40 to 80 µm. Moreover, the surface zone of the workpiece is heated by the machining operation and the chip transport and is subsequently cooled very rapidly by the cutting fluid or by so-called self-quenching. Depending on the boundary conditions, this can lead to microstructural changes with hardness-increasing (secondary quenching) or hardness-reducing (tempering) effects. Not least, both types of loading also influence the magnitude of the residual stresses in the workpiece surface. This work investigates the surface-zone alterations produced by core-hole drilling as well as the tool life of subsequent tools, such as taps and thread formers, and their dependence on the wear state of the core-hole drill. Furthermore, an energy balance is used to isolate the contributions that primarily influence the properties of the hole's surface zone. This is done by means of a mathematical model of the drilling process that accounts for the influences of the cutting edge geometry, the cutting edge rounding, the cutting edge chamfer and the flank wear.
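As a rough illustration of the energy balance mentioned above, the sketch below splits drilling power into its torque (cutting) and feed components using the standard relations P_cut = 2*pi*n*M and P_feed = F_f*v_f; the numerical values are invented, not measurements from this work.

    import math

    # Assumed illustrative drilling data, not measured values from this work.
    n = 2000 / 60          # spindle speed (rev/s)
    M = 4.5                # drilling torque (N*m)
    F_f = 1200.0           # feed force (N)
    f = 0.2                # feed (mm/rev)

    P_cut = 2 * math.pi * n * M        # power delivered via torque (W)
    v_f = n * f / 1000.0               # feed rate (m/s)
    P_feed = F_f * v_f                 # power delivered via feed force (W)

    share = P_feed / (P_cut + P_feed)
    print(f"P_cut = {P_cut:.0f} W, P_feed = {P_feed:.1f} W "
          f"({share:.1%} of the total enters via feed)")

With such values almost all of the mechanical power enters through the torque, which is why the thermal load on the hole's surface zone is dominated by the cutting rather than the feed contribution.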

Relevance:

20.00%

Publisher:

Abstract:

In this work, various computer models, calculation procedures and methods are developed to support the integration of large amounts of wind power into the electrical energy supply. The computational model for simulating simultaneously fed-in wind energy generates aggregate time series for arbitrarily composed groups of wind turbines, based on wind and power measurements from the recent past. This model supplies important basic data for the analysis of wind power feed-in, including for future scenarios. For investigating the effects of wind power feed-in from large-area turbine groups in the gigawatt range, various statistical analyses and illustrative visualisations are developed. The model developed in this work for calculating the currently fed-in wind power from online power measurements of representative wind farms provides valuable information for the power and frequency control of the grid operators. The associated procedures for determining the representative sites and for verifying their representativeness form the basis for an accurate picture of wind power feed-in for larger supply areas, based on only a few power measurements at wind farms. A further valuable tool for the optimal integration of wind energy into the electrical energy supply are the prediction models, which determine the wind power feed-in to be expected in the short to medium term. Building on previous research, this work presents two models based on artificial neural networks that provide the expected time course of wind power for grid regions and control zones from measured power data or forecast meteorological parameters. The software integration of the model for calculating the currently fed-in wind power with the models for short-term and day-ahead prediction offers an attractive complete solution for integrating wind energy into the control rooms of the grid operators. The interfaces developed and the modular structure of the program allow simple and fast implementation in arbitrary system environments. Building on the performance of the online and prediction models, operating strategies are discussed for wind farms aggregated into clusters in the gigawatt range, intended to enable an integration of the planned offshore wind farms that is optimal with respect to ecological and economic criteria as well as security of supply.
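The online model's core idea, extrapolating a region's total feed-in from a few representative wind farms, can be sketched as a simple capacity-weighted upscaling, shown below with invented capacities and measurements; the actual model and its representativeness checks are considerably more involved.

    # Upscaling sketch: estimate regional wind power from representative farms.
    # Capacities (MW) and live measurements (MW) below are invented examples.
    rep_capacity = {"farm_A": 50.0, "farm_B": 120.0, "farm_C": 80.0}
    rep_power = {"farm_A": 21.0, "farm_B": 66.0, "farm_C": 30.0}

    region_installed_mw = 2400.0     # total installed capacity in the region

    # Capacity-weighted utilisation of the representative farms ...
    utilisation = sum(rep_power.values()) / sum(rep_capacity.values())
    # ... scaled to the whole region's installed capacity.
    estimate_mw = utilisation * region_installed_mw
    print(f"utilisation {utilisation:.1%} -> regional estimate {estimate_mw:.0f} MW")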

Relevance:

20.00%

Publisher:

Abstract:

Thermally activated building systems are building components that form part of the room-enclosing surfaces and that can be supplied, via an integrated pipe system, with a heating or cooling medium, thus enabling the heating or cooling of the room. Under this definition, the variety of constructions ranges from heated or chilled ceilings, through floor slabs with pipes embedded in the core, to underfloor heating systems. The extremely slow-responding systems among them are deliberately used to decouple energy supply and room energy demand in time, in the interest of rational energy use, for example active cooling of the slab at night and passive cooling of the room via the cool slab during the day. Building and plant concepts that include slow-responding thermally activated building systems presuppose, in a competent and responsible planning process, the use of modern building simulation tools in order to make well-founded statements about comfort and energy demand. Within these tools, thermally activated building systems are represented by calculation components that are based on mathematical-physical models and serve to solve the multi-dimensional transient heat conduction problem inherent in the component. Until now, two fundamentally different approaches, both stemming from physical modelling, were available for this, each imposing limits on the representable geometry or on computation speed. The present work documents a new approach, termed experimental modelling. By way of system identification, the parameters of a compact black-box model can be determined from experimentally obtained data series, such that the model reproduces the input-output behaviour of the associated, arbitrarily constructed, thermally activated component with sufficient accuracy. The measurement series can be generated by highly accurate computations which, because of their level of detail, would be unsuitable for direct use in building simulation. The application of system identification to the two-dimensional heat conduction problem, and the proof of its suitability, is carried out on six very different constructions of thermally activated building systems and confirms very small temperature and energy balance errors. Comparisons between black-box models obtained via system identification and physical models for two floor constructions show that the former can also serve as a reference for accuracy assessments. The practicality of the new modelling approach is demonstrated in case studies involving full-year simulations with component and operation variants for an exemplary office room. For this purpose, the black-box model is integrated into the commercial building and plant simulation program CARNOT. The acceptable computation times for a single-zone building model, combined with the high accuracies, attest to the suitability of the new modelling approach.
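A minimal sketch of the kind of black-box system identification described above: fitting a discrete first-order ARX model to input-output series by least squares. The synthetic "component" below is a made-up first-order system standing in for the detailed heat conduction computations that generate the training data.

    import numpy as np

    # Synthetic data: a made-up first-order thermal response standing in for
    # the detailed heat-conduction computation that generates training series.
    rng = np.random.default_rng(2)
    u = rng.uniform(20.0, 40.0, 500)             # supply temperature (input)
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]   # "true" component behaviour

    # Identify an ARX model y[k] = a*y[k-1] + b*u[k-1] by least squares.
    Phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a, b = theta
    print(f"identified a = {a:.3f}, b = {b:.3f} (true 0.9, 0.1)")

Real thermally activated components need higher model orders and several inputs, but the identification step remains this same regression.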

Relevance:

20.00%

Publisher:

Abstract:

The development of conceptual knowledge systems specifically requires knowledge acquisition tools within the framework of formal concept analysis. In this paper, the existing tools are presented and further developments are discussed.
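For readers new to formal concept analysis: a formal context is a binary relation between objects and attributes, and the two derivation operators below generate the formal concepts that such acquisition tools work with; the tiny context is invented for illustration.

    # A tiny invented formal context: objects -> attributes they have.
    context = {
        "duck":  {"flies", "swims"},
        "eagle": {"flies", "hunts"},
        "carp":  {"swims"},
    }

    def intent(objects):
        """Attributes shared by all given objects (derivation A -> A')."""
        attr_sets = [context[o] for o in objects]
        return set.intersection(*attr_sets) if attr_sets else set()

    def extent(attributes):
        """Objects having all given attributes (derivation B -> B')."""
        return {o for o, attrs in context.items() if attributes <= attrs}

    # A formal concept is a pair (A, B) with intent(A) = B and extent(B) = A:
    A = extent({"swims"})
    print(A, intent(A))   # {'duck', 'carp'} {'swims'}  -> a formal concept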

Relevance:

20.00%

Publisher:

Abstract:

The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of development, both in empirical or theoretical research and in the practice of mathematics instruction and mathematics education, concerning problem solving, modelling, applications and relations to other subjects. In particular, we shall identify and discuss four major trends: a widened spectrum of arguments, an increased globality, an increased unification, and an extended use of computers. In the final part III we shall comment upon some important issues and problems related to our topic.

Relevance:

20.00%

Publisher:

Abstract:

This paper aims at giving a concise survey of the present state-of-the-art of mathematical modelling in mathematics education and instruction. It will consist of four parts. In part 1, some basic concepts relevant to the topic will be clarified and, in particular, mathematical modelling will be defined in a broad, comprehensive sense. Part 2 will review arguments for the inclusion of modelling in mathematics teaching at schools and universities, and identify certain schools of thought within mathematics education. Part 3 will describe the role of modelling in present mathematics curricula and in everyday teaching practice. Some obstacles for mathematical modelling in the classroom will be analysed, as well as the opportunities and risks of computer usage. In part 4, selected materials and resources for teaching mathematical modelling, developed in the last few years in America, Australia and Europe, will be presented. The examples will demonstrate many promising directions of development.

Relevance:

20.00%

Publisher:

Abstract:

The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of development, both in empirical or theoretical research and in the practice of mathematics instruction and mathematics education, concerning (applied) problem solving, modelling, applications and relations to other subjects. In particular, we shall identify and discuss four major trends: a widened spectrum of arguments, an increased globality, an increased unification, and an extended use of computers. In the final part III we shall comment upon some important issues and problems related to our topic.