990 results for Optimal values
Abstract:
In this paper, we study a k-out-of-n system with a single server who also provides service to external customers. The system consists of two parts: (i) a main queue of customers (failed components of the k-out-of-n system) and (ii) a pool (of finite capacity M) of external customers, together with an orbit for external customers who find the pool full. An external customer who finds the pool full on arrival joins the orbit with a certain probability and otherwise leaves the system forever. An orbital customer who finds the pool full at an epoch of repeated attempt returns to the orbit with some probability (< 1) and otherwise leaves the system forever. We compute the steady-state system-size probabilities, derive several performance measures, and provide numerical illustrations.
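The admission logic described above can be sketched as a small Monte Carlo simulation. The abstract's probability symbols were lost in the source, so GAMMA, DELTA and the per-step rates below are illustrative placeholders, not the paper's values:

```python
import random

# Hypothetical parameters (placeholders, not the paper's symbols or values).
M = 5           # pool capacity
GAMMA = 0.6     # P(join orbit | pool full on arrival)
DELTA = 0.8     # P(return to orbit | pool full at a retrial epoch), < 1
P_ARRIVE = 0.3  # per-step external arrival probability
P_SERVE = 0.25  # per-step probability a pooled customer finishes service
P_RETRY = 0.2   # per-step retry probability for each orbital customer

def simulate(steps=100_000, seed=1):
    random.seed(seed)
    pool, orbit, lost, arrived = 0, 0, 0, 0
    for _ in range(steps):
        if pool > 0 and random.random() < P_SERVE:  # a service completes
            pool -= 1
        if random.random() < P_ARRIVE:              # external arrival
            arrived += 1
            if pool < M:
                pool += 1
            elif random.random() < GAMMA:
                orbit += 1                          # pool full: join orbit
            else:
                lost += 1                           # pool full: leave forever
        retained = 0                                # orbital retrial epochs
        for _ in range(orbit):
            if random.random() < P_RETRY:
                if pool < M:
                    pool += 1                       # retrial succeeds
                elif random.random() < DELTA:
                    retained += 1                   # pool full: back to orbit
                else:
                    lost += 1                       # pool full: leave forever
            else:
                retained += 1                       # no retrial this step
        orbit = retained
        assert 0 <= pool <= M
    return lost / max(arrived, 1)

loss_fraction = simulate()
```

The fraction of external customers lost depends entirely on the hypothetical parameter set above.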
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal-cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal-cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective optimization methods. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. Analyses using the S/N ratio and ANOVA were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yielded the optimum machining parameters from the experimentation. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively examine new design solutions in the relevant search spaces in order to reach the true optimum solution.
A mathematical model was developed using response surface analysis for surface roughness, and the model was validated using published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420 material. All the above algorithms were tested for efficiency, robustness and accuracy, and it was observed how often they outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve better surface finish. The computational results using SA clearly demonstrated that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming, collaborative behavior of biological populations; the results show that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm, incorporating a stochastic crossover technique and an artificial initial population scheme, was developed to provide a faster search mechanism. Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at the optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion.
To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures through which nature optimizes its own systems.
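As a concrete illustration of how PSO searches cutting conditions, here is a minimal particle swarm minimizing a hypothetical quadratic response-surface model of surface roughness Ra(v, f, d). The coefficients, parameter bounds and PSO constants are illustrative assumptions, not the fitted model or data from the thesis:

```python
import random

# Hypothetical response-surface model for surface roughness Ra as a function
# of cutting speed v (m/min), feed f (mm/rev) and depth of cut d (mm).
# The coefficients are illustrative placeholders only.
def roughness(v, f, d):
    return 2.0 - 0.004 * v + 12.0 * f + 0.3 * d + 0.000005 * v**2 + 40.0 * f**2

BOUNDS = [(50.0, 200.0), (0.05, 0.3), (0.5, 2.0)]  # (v, f, d) search ranges

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_particles)]
    vel = [[0.0] * len(BOUNDS) for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [roughness(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for k, (lo, hi) in enumerate(BOUNDS):
                # inertia + cognitive pull + social pull
                vel[i][k] = (w * vel[i][k]
                             + c1 * rng.random() * (pbest[i][k] - pos[i][k])
                             + c2 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] = min(max(pos[i][k] + vel[i][k], lo), hi)
            val = roughness(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_ra = pso()
```

For this particular surrogate the minimum sits on a corner of the box (high speed, low feed, low depth of cut); the swarm converges there within a few dozen iterations.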
Abstract:
Xylanases, with hydrolytic activity on xylan, one of the hemicellulosic materials present in plant cell walls, were identified long ago, and the applicability of this enzyme is constantly growing. All these applications, especially in the pulp and paper industries, require novel enzymes. Microbial xylanases have been extensively documented, but none meets all the required characteristics: higher production, higher pH and temperature optima, good stability under these conditions and, finally, low associated cellulase and protease production. The present study analyses various facets of xylanase biotechnology, with emphasis on bacterial xylanases. Fungal xylanases suffer from problems such as low pH optima for both enzyme activity and growth; moreover, the associated production of cellulases at significant levels makes fungal xylanases less suitable for application in the paper and pulp industries. Bacillus SSP-34, selected from 200 isolates, clearly exhibited a xylan-catabolizing nature distinct from earlier reports. Its stability at higher temperatures and pH values, along with its pH and temperature optima, renders Bacillus SSP-34 xylanase more suitable than many previously reported enzymes for application in the pulp and paper industries. Bacillus SSP-34 is an alkalophilic, thermotolerant bacterium which, under the optimal cultural conditions mentioned earlier, can produce 2.5 times more xylanase than in the basal medium. A xylan concentration of 0.5% in the medium was found to be the best carbon source, resulting in 366 IU/ml of xylanase activity. This induction was subject to catabolite repression by glucose. Xylose was a good inducer of xylanase production. The combination of yeast extract and peptone, selected from several nitrogen sources, resulted in the highest enzyme production (379 ± 0.2 IU/ml) at the optimum final concentration of 0.5%.
All the cultural and nutritional parameters were compiled, and a comparative study showed that the modified medium resulted in a xylanase activity of 506 IU/ml, fivefold higher than the basal medium. A novel combination of purification techniques, namely ultrafiltration, ammonium sulphate fractionation, DEAE Sepharose anion exchange chromatography, CM Sephadex cation exchange chromatography and gel permeation chromatography, yielded purified xylanase with a specific activity of 1723 U/mg protein at 33.3% yield. The enzyme had a molecular weight of 20-22 kDa. The Km of the purified xylanase was 6.5 mg of oat spelts xylan per ml and the Vmax was 1233 µmol/min/mg protein. Bacillus SSP-34 xylanase increased ISO brightness from 41.1% to 48.5%. The hydrolytic action of the xylanase was of the endo type. Thus the organism Bacillus SSP-34 has interesting biotechnological and physiological aspects, and the SSP-34 xylanase, having the desired characteristics, seems well suited for application in the paper and pulp industries.
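The reported kinetic constants can be turned into a worked example. Assuming the standard Michaelis-Menten model v = Vmax·S/(Km + S) (the model itself is a textbook assumption, not stated in the abstract), the reaction rate at any substrate concentration follows directly:

```python
# Kinetic constants reported for the purified SSP-34 xylanase.
KM = 6.5       # mg oat spelts xylan per ml
VMAX = 1233.0  # umol/min/mg protein

def rate(s):
    """Michaelis-Menten rate at substrate concentration s (mg/ml)."""
    return VMAX * s / (KM + s)

# Standard sanity check: at s = Km the rate is exactly half of Vmax.
half = rate(KM)
```

At saturating substrate concentrations the rate approaches Vmax, as expected from the model.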
Abstract:
Computational Biology is the research area that contributes to the analysis of biological data through the development of algorithms addressing significant research problems. Molecular-biology data include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes and columns represent experimental conditions, which can be different tissue types or time points; the entries of the matrix are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interaction, their respective contributions to the same pathways, and so on. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced: the simultaneous clustering of both rows and columns of a data matrix.
Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous in the original matrix, and biclusters are not disjoint. Computing biclusters is costly because all combinations of rows and columns must be considered to find all the biclusters. The search space of the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively; usually m + n exceeds 3000, and the biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the biclustering problem: ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms use a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue below a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds, generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used to identify biclusters. These algorithms were applied to the Yeast and Lymphoma datasets; all of them identify biologically relevant and statistically significant biclusters, as validated against the Gene Ontology database, and all are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms: some of them identify, in both the Yeast and Lymphoma data sets, biclusters with very high row variance, higher than that of any other algorithm using the mean squared residue. Such biclusters, which capture significant changes in expression level, are highly relevant biologically.
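The mean squared residue score these algorithms rely on can be sketched in a few lines (this is the standard Cheng-Church formulation; the example matrices and any threshold δ are illustrative, not from the thesis):

```python
import numpy as np

# Mean squared residue of a candidate bicluster (a submatrix of the gene
# expression matrix). A submatrix whose MSR is below a chosen threshold
# delta is accepted as a coherent bicluster.
def mean_squared_residue(sub):
    sub = np.asarray(sub, dtype=float)
    row_means = sub.mean(axis=1, keepdims=True)   # mean per gene (row)
    col_means = sub.mean(axis=0, keepdims=True)   # mean per condition (column)
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

# A perfectly additive (shifted-pattern) submatrix has MSR exactly 0.
perfect = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
msr0 = mean_squared_residue(perfect)
```

A submatrix without such additive coherence scores strictly above zero, which is what the search algorithms try to drive below the threshold while growing the bicluster.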
Abstract:
We design optimal band pass filters for electrons in semiconductor heterostructures, under a uniform applied electric field. The inner cells are chosen to provide a desired transmission window. The outer cells are then designed to transform purely incoming or outgoing waves into Bloch states of the inner cells. The transfer matrix is interpreted as a conformal mapping in the complex plane, which allows us to write constraints on the outer cell parameters, from which physically useful values can be obtained.
Abstract:
In a recent paper A. S. Johal and D. J. Dunstan [Phys. Rev. B 73, 024106 (2006)] have applied multivariate linear regression analysis to the published data of the change in ultrasonic velocity with applied stress. The aim is to obtain the best estimates for the third-order elastic constants in cubic materials. From such an analysis they conclude that uniaxial stress data on metals turns out to be nearly useless by itself. The purpose of this comment is to point out that by a proper analysis of uniaxial stress data it is possible to obtain reliable values of third-order elastic constants in cubic metals and alloys. Cu-based shape memory alloys are used as an illustrative example.
Abstract:
We present our recent achievements in the growth and optical characterization of KYb(WO4)2 (hereafter KYbW) crystals and demonstrate laser operation in this stoichiometric material. Single crystals of KYbW with optimal crystalline quality have been grown by the top-seeded-solution growth slow-cooling method. The optical anisotropy of this monoclinic crystal has been characterized, locating the tensor of the optical indicatrix and measuring the dispersion of the principal values of the refractive indices as well as the thermo-optic coefficients. Sellmeier equations have been constructed that are valid in the visible and near-IR spectral range. Raman scattering has been used to determine the phonon energies of KYbW, and a simple physical model is applied for classification of the lattice vibration modes. Spectroscopic studies (absorption and emission measurements at room and low temperature) have been carried out in the spectral region near 1 µm characteristic of the ytterbium transition. The energy positions of the Stark sublevels of the ground and excited state manifolds have been determined and the vibronic substructure has been identified. The intrinsic lifetime of the upper laser level has been measured, taking care to suppress the effect of reabsorption, and the intrinsic quantum efficiency has been estimated. Lasing has been demonstrated near 1074 nm with 41% slope efficiency at room temperature using a 0.5 mm thin plate of KYbW. This laser material holds great promise for diode-pumped high-power lasers, thin-disk and waveguide designs, as well as for ultrashort (ps/fs) pulse laser systems.
Abstract:
The electronic structure and properties of cerium oxides (CeO2 and Ce2O3) have been studied in the framework of the LDA+U and GGA(PW91)+U implementations of density functional theory. The dependence of selected observables of these materials on the effective U parameter has been investigated in detail. The examined properties include lattice constants, bulk moduli, density of states, and formation energies of CeO2 and Ce2O3. For CeO2, the LDA+U results are in better agreement with experiment than the GGA+U results, whereas for the computationally more demanding Ce2O3 both approaches give comparable accuracy. Furthermore, as expected, Ce2O3 is much more sensitive to the choice of the U value. Generally, the PW91 functional provides an optimal agreement with experiment at lower U energies than LDA does. In order to achieve a balanced description of both kinds of materials, and also of nonstoichiometric CeO2−x phases, an appropriate choice of U is suggested for the LDA+U and GGA+U schemes. Nevertheless, an optimum value appears to be property dependent, especially for Ce2O3. Optimum U values are found to be, in general, larger than values determined previously in a self-consistent way.
Abstract:
This paper presents a Reinforcement Learning (RL) approach to economic dispatch (ED) using a Radial Basis Function neural network. We formulate ED as an N-stage decision-making problem, propose a novel architecture to store Q-values, and present a learning algorithm to learn the weights of the neural network. Although many stochastic search techniques, such as simulated annealing, genetic algorithms and evolutionary programming, have been applied to ED, they require searching for the optimal solution anew for each load demand, and they are limited in handling stochastic cost functions. In our approach, once the Q-values are learned, the dispatch can be found for any load demand. We recently proposed an RL approach to ED in which only the optimum dispatch for a set of specified discrete values of power demand could be found. The performance of the proposed algorithm is validated on the IEEE 6-bus system, considering transmission losses.
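A heavily simplified sketch of the N-stage formulation may help: the state at stage n is the demand still to be allocated, and the action is the output of unit n. The paper stores Q-values in an RBF network and uses IEEE 6-bus data with losses; the tabular storage, made-up quadratic cost coefficients and coarse action grid below are illustrative assumptions only:

```python
import random

# Quadratic fuel costs a + b*P + c*P^2 with illustrative coefficients;
# transmission losses are ignored in this sketch.
COEFF = [(0.0, 2.0, 0.010), (0.0, 1.8, 0.015), (0.0, 2.2, 0.008)]
STEP, MAX_P = 10, 30
ACTIONS = list(range(0, MAX_P + STEP, STEP))      # 0, 10, 20, 30 MW per unit

def unit_cost(n, p):
    a, b, c = COEFF[n]
    return a + b * p + c * p * p

def q_learning(demand=60, episodes=30000, alpha=0.2, eps=0.3, seed=0):
    rng = random.Random(seed)
    Q = {}                                        # Q[(stage, remaining, action)]
    for _ in range(episodes):
        remaining = demand
        for n in range(len(COEFF)):
            feasible = [p for p in ACTIONS if p <= remaining]
            if rng.random() < eps:                # epsilon-greedy exploration
                p = rng.choice(feasible)
            else:
                p = min(feasible, key=lambda a: Q.get((n, remaining, a), 0.0))
            cost, nxt = unit_cost(n, p), remaining - p
            if n < len(COEFF) - 1:                # cost-to-go from next stage
                future = min(Q.get((n + 1, nxt, a), 0.0)
                             for a in ACTIONS if a <= nxt)
            else:
                cost += 1000.0 * nxt              # penalty for unmet demand
                future = 0.0
            key = (n, remaining, p)
            Q[key] = Q.get(key, 0.0) + alpha * (cost + future - Q.get(key, 0.0))
            remaining = nxt
    return Q

def dispatch(Q, demand):
    """Greedy (minimum cost-to-go) dispatch read off the learned Q-values."""
    remaining, plan = demand, []
    for n in range(len(COEFF)):
        feasible = [p for p in ACTIONS if p <= remaining]
        p = min(feasible, key=lambda a: Q.get((n, remaining, a), float("inf")))
        plan.append(p)
        remaining -= p
    return plan

Q = q_learning()
plan = dispatch(Q, 60)
```

Once the Q-values are learned, the same table yields a dispatch for any reachable demand value, which is the key advantage over per-demand stochastic search.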
Abstract:
This paper presents the optimal design of a surface-mounted permanent-magnet (PM) brushless direct-current (BLDC) motor meant for spacecraft applications. Spacecraft applications require a motor with high torque density, minimum cogging torque, good positional stability and a high torque-to-inertia ratio. The performance of two machine configurations, slotted PMBLDC and slotless PMBLDC with a Halbach array, is compared with the help of analytical and finite element (FE) methods. It is found that, unlike the slotted PMBLDC motor, the slotless type with a Halbach array develops zero cogging torque without reduction in the developed torque. Moreover, being coreless, the machine provides a high torque-to-inertia ratio and zero magnetic stiction.
Abstract:
Hindi
Abstract:
In a previous paper we determined a generic formula for the polynomial solution families of the well-known differential equation of hypergeometric type σ(x)y''_n(x) + τ(x)y'_n(x) − λ_n y_n(x) = 0. In this paper, we give another such formula, which enables us to present a generic formula for the values of the monic classical orthogonal polynomials at the boundary points of their interval of definition.
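As an illustration of such boundary values for one classical family (the Legendre special case on [−1, 1], not the paper's generic formula): the monic Legendre polynomial of degree n takes the value 2^n (n!)^2 / (2n)! at the boundary point x = 1, since the standard Legendre polynomial satisfies P_n(1) = 1 and has leading coefficient (2n)!/(2^n (n!)^2). A quick numerical check:

```python
import numpy as np
from math import factorial

def monic_legendre_at_one(n):
    # Convert the degree-n Legendre polynomial to the power basis, then
    # divide by its leading coefficient to make it monic before evaluating.
    P = np.polynomial.legendre.Legendre.basis(n).convert(
        kind=np.polynomial.Polynomial)
    return P(1.0) / P.coef[-1]

def closed_form(n):
    # Boundary value of the monic Legendre polynomial at x = 1.
    return 2.0**n * factorial(n) ** 2 / factorial(2 * n)
```

For n = 2 both give 2/3, the value of the monic polynomial x^2 − 1/3 at x = 1.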
Abstract:
Adapted communication systems for efficient use in decentralized electrical supply structures - In public electricity networks, the exchange of information has long been handled successfully by historically evolved and adapted systems. Based on a wide spectrum of experience and a well-developed communication infrastructure, the information-technology connection of a participant in the public supply network is primarily not an obstacle. The situation is different in decentralized supply structures. Since the electrification of decentralized supply areas, by networking many distributed generation plants and building distribution networks not connected to the public electricity grid (minigrids), has gained popularity only in recent years, few projects have been completed to date. For the information-technology connection of participants in these structures, this means that experience to draw on for system selection is available only to a very limited extent. Within the scope of this dissertation, a decision-making process (a guideline for system selection) has therefore been developed which, in addition to a direct comparison of communication systems based on derived evaluation criteria and types, reduces the comparison to two system values (relative expected-utility gain and total-cost increase) and thereby enables the choice of a suitable communication system for application in decentralized electrical supply structures. Following classical decision theory, the calculation of an expected utility per communication system, as the total sum of the individual products of the utility values and the weighting factors for each system, combines the technical parameters and application-specific aspects, as well as the subjective evaluations, into a single value.
By determining the total annual expenditure required for a communication system, or for the targeted communication tasks, as a function of the application, a further decision parameter for system selection is provided alongside the determined expected utility of the system. The subsequent choice of suitable reference quantities allows the decision among the candidate systems to be reduced to a comparison against a reference system. Here it is not the absolute differences in expected utility or total annual expenditure that are of interest, but rather how the respective system compares with the norm (the reference system). That is, the relative increase in expected utility or in total cost of each system is the decisive parameter for system selection. By entering the calculated relative expected-utility and total-cost increases into a newly developed four-quadrant matrix, a simple (graphical) decision on the communication system best suited to the application can be made, taking into account the position of the corresponding value pairs. An exemplary system selection, based on the analysis results for communication systems intended for use in decentralized electrical supply structures, illustrates and verifies the handling of the developed concept. The subsequent realization, modification and testing of the previously selected Distribution Line Carrier system further underlines the efficiency of the developed decision-making process. Overall, the decision maker responsible for system selection is provided with a tool that allows simple and practical decision making.
With the developed concept, a holistic view is possible for the first time, taking into account the technical and application-specific as well as the economic aspects and boundary conditions. The decision-making concept is not limited to system selection for decentralized electrical energy supply structures; with appropriate modification of the requirements, system parameters, etc., it can also be transferred to other applications.
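The two decision parameters described above, the weighted-sum expected utility per system and the relative gains over a reference system, can be sketched as follows. The criteria, weights, utility scores and annual costs are entirely hypothetical, chosen only to show the mechanics:

```python
# Hypothetical evaluation criteria and weighting factors (sum to 1).
WEIGHTS = {"data_rate": 0.4, "range": 0.35, "cost_structure": 0.25}

# Hypothetical utility scores (0..10) per candidate communication system.
SYSTEMS = {
    "DLC":  {"data_rate": 6, "range": 9, "cost_structure": 8},
    "GSM":  {"data_rate": 7, "range": 7, "cost_structure": 5},
    "WLAN": {"data_rate": 9, "range": 3, "cost_structure": 7},
}
ANNUAL_COST = {"DLC": 1200.0, "GSM": 1500.0, "WLAN": 1100.0}  # illustrative

def expected_utility(scores):
    # Sum of utility value times weighting factor over all criteria.
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def relative_gains(reference="GSM"):
    # (relative expected-utility gain, relative total-cost increase)
    # of each system versus the chosen reference system.
    ref_u = expected_utility(SYSTEMS[reference])
    ref_c = ANNUAL_COST[reference]
    return {s: ((expected_utility(SYSTEMS[s]) - ref_u) / ref_u,
                (ANNUAL_COST[s] - ref_c) / ref_c)
            for s in SYSTEMS}

gains = relative_gains("GSM")
```

In the four-quadrant matrix, a value pair with a positive utility gain and a negative cost increase relative to the reference marks the preferred quadrant; with the hypothetical numbers above, DLC lands there.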
Abstract:
Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically by organic light-emitting diodes (OLEDs), since the light-emitting layer is very thin, under 100 nm. The main question is what pixel size is technologically achievable. The next generation of displays will consider three-dimensional image display. In 2D, one considers vertical and horizontal resolution; in 3D or holographic images there is another dimension, depth. The major requirement is high resolution in the horizontal dimension in order to sustain the third dimension, using special lenticular glass or barrier masks to separate the views for each eye. A high-resolution 3D display offers hundreds more different views of objects or landscapes. OLEDs have the potential to be a key technology for information displays in the future. The display technology presented in this work promises to bring bright-colour 3D flat-panel displays into use in a unique way. Unlike the conventional TFT matrix, OLED displays have constant brightness and colour, independent of the viewing angle, i.e. the observer's position in front of the screen. A sandwich (just 0.1 micron thick) of organic thin films between two conductors makes an OLED device. These special materials are called electroluminescent organic semiconductors (or organic photoconductors, OPCs). When an electrical current is applied, a bright light is emitted (electrophosphorescence) from the formed organic light-emitting diode. Usually an ITO layer is used as the transparent electrode of an OLED. Displays of this type were the first to reach volume manufacture, and only a few products are available on the market at present. The key challenges that OLED technology faces in its application areas are producing high-quality white light, achieving low manufacturing costs, and increasing efficiency and lifetime at high brightness.
Looking towards the future, by combining OLEDs with specially constructed surface lenses and proper image-management software, it will be possible to achieve 3D images.
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land use patterns. An essential methodology for studying and quantifying such interactions is provided by land-use models, with which it is possible to analyze the complex structure of linkages and feedbacks and also to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition; on the regional scale in particular, a variety of models for different regions and research questions have been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation, which enable efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grids, grid cells, attributes, etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be used and manipulated by modeling applications. The scripting-language interpreter is embedded in SITE.
The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, the socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period 1981 to 2002; analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Owing to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component. The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map.
Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, and reference land-use maps were compiled for this period. It could be shown that efficient automated model calibration with SITE is possible; nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously, resulting in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
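The figure-of-merit objective used in the calibration can be sketched for the simplified case of binary change maps (the full measure additionally distinguishes wrongly predicted change categories; the toy arrays below are illustrative, not STORMA data):

```python
import numpy as np

# Figure of merit between observed and simulated land-use *change* maps:
# hits / (hits + misses + false alarms), per-cell, for binary change masks.
def figure_of_merit(observed_change, simulated_change):
    obs = np.asarray(observed_change, dtype=bool)
    sim = np.asarray(simulated_change, dtype=bool)
    hits = np.sum(obs & sim)            # change observed and simulated
    misses = np.sum(obs & ~sim)         # change observed but not simulated
    false_alarms = np.sum(~obs & sim)   # change simulated but not observed
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

obs = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)   # toy reference change map
sim = np.array([[1, 0, 0], [1, 1, 0]], dtype=bool)   # toy simulated change map
fom = figure_of_merit(obs, sim)
```

A calibration run then simply asks the optimizer (here, the genetic algorithm) to maximize this score over the selected model parameters; the toy maps above give 2 hits, 1 miss and 1 false alarm.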