1000 results for neural source
Abstract:
This paper presents a Reinforcement Learning (RL) approach to economic dispatch (ED) using a Radial Basis Function (RBF) neural network. We formulate the ED as an N-stage decision-making problem. We propose a novel architecture to store Q-values and present a learning algorithm to learn the weights of the neural network. Although many stochastic search techniques such as simulated annealing, genetic algorithms and evolutionary programming have been applied to ED, they require searching for the optimal solution anew for each load demand, and they are limited in handling stochastic cost functions. In our approach, once the Q-values are learned, the dispatch for any load demand can be found. We have recently proposed an RL approach to ED; in that approach, we could find the optimum dispatch only for a set of specified discrete values of power demand. The performance of the proposed algorithm is validated on the IEEE 6-bus system, considering transmission losses.
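As an illustration of the kind of scheme this abstract describes, the following is a minimal Q-learning-with-RBF-approximation sketch; the unit cost curves, output discretisation, RBF centres and learning constants are invented for the example and are not the paper's architecture.

```python
# Minimal sketch (not the authors' code): Q-learning over an N-stage dispatch,
# with Q-values approximated by a Gaussian RBF network over residual demand.
# Cost curves, discretisation and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_UNITS = 3
P_LEVELS = np.linspace(50.0, 200.0, 16)              # discretised unit outputs (MW)
COST = [lambda p, a=a, b=b: a * p**2 + b * p          # assumed quadratic fuel costs
        for a, b in [(0.004, 5.3), (0.006, 5.5), (0.009, 5.8)]]

CENTRES = np.linspace(0.0, 600.0, 25)                 # RBF centres over residual demand
SIGMA = 30.0
W = np.zeros((N_UNITS, len(P_LEVELS), len(CENTRES)))  # one weight vector per (stage, action)

def phi(d):
    """Gaussian RBF features of the residual demand d."""
    return np.exp(-0.5 * ((d - CENTRES) / SIGMA) ** 2)

def q(stage, d):
    return W[stage] @ phi(d)                          # Q(stage, d, all actions)

ALPHA, EPS, EPISODES = 0.01, 0.2, 20000
for _ in range(EPISODES):
    demand = rng.uniform(300.0, 550.0)                # train across a range of load demands
    for stage in range(N_UNITS):
        qs = q(stage, demand)
        a = rng.integers(len(P_LEVELS)) if rng.random() < EPS else int(np.argmin(qs))
        p = P_LEVELS[a]
        nxt = demand - p
        # stage cost plus cheapest continuation (penalise unmet demand at the end)
        target = COST[stage](p) + (50.0 * abs(nxt) if stage == N_UNITS - 1
                                   else np.min(q(stage + 1, nxt)))
        W[stage, a] += ALPHA * (target - qs[a]) * phi(demand)
        demand = nxt

# Greedy dispatch for an arbitrary demand once the Q-values are learned
d, plan = 450.0, []
for stage in range(N_UNITS):
    p = P_LEVELS[int(np.argmin(q(stage, d)))]
    plan.append(p)
    d -= p
print("dispatch:", plan, "residual:", round(d, 1))
```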
Abstract:
The paper investigates the feasibility of implementing an intelligent classifier for noise sources in the ocean with the help of artificial neural networks, using higher-order spectral features. Non-linear interactions between the component frequencies of the noise data can give rise to certain phase relations called Quadratic Phase Coupling (QPC), which cannot be characterized by power spectral analysis. However, bispectral analysis, a higher-order estimation technique, can reveal the presence of such phase couplings and provides a measure to quantify them. A feedforward neural network has been trained and validated with higher-order spectral features.
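A hedged sketch of the kind of pipeline the abstract describes: a segment-averaged bispectrum estimate, a few summary features of its magnitude, and a small feedforward classifier. The segment length, toy signals, feature choice and classifier settings are assumptions, not the paper's.

```python
# Illustrative sketch (not the paper's implementation): estimate the bispectrum
# by averaging triple products of segment FFTs, summarise its magnitude into a
# few features, and train a small feedforward classifier on them.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
NSEG, NFFT = 64, 128

def bispectrum(x):
    """Segment-averaged direct bispectrum estimate B(f1, f2)."""
    B = np.zeros((NFFT // 2, NFFT // 2), dtype=complex)
    f = np.arange(NFFT // 2)
    for seg in np.array_split(x, NSEG):
        X = np.fft.fft(seg, NFFT)
        # B(f1, f2) ~ E[ X(f1) X(f2) conj(X(f1 + f2)) ]
        B += X[f][:, None] * X[f][None, :] * np.conj(X[(f[:, None] + f[None, :]) % NFFT])
    return B / NSEG

def qpc_features(x):
    mag = np.abs(bispectrum(x))
    return np.array([mag.mean(), mag.max(), mag.std()])   # crude phase-coupling summary

def make_signal(coupled):
    """Three cosines per segment with f3 = f1 + f2; if coupled, also p3 = p1 + p2."""
    t, chunks = np.arange(NFFT), []
    for _ in range(NSEG):
        p1, p2 = rng.uniform(0, 2 * np.pi, 2)
        p3 = p1 + p2 if coupled else rng.uniform(0, 2 * np.pi)
        chunks.append(np.cos(2 * np.pi * 0.125 * t + p1)
                      + np.cos(2 * np.pi * 0.1875 * t + p2)
                      + np.cos(2 * np.pi * 0.3125 * t + p3)
                      + 0.5 * rng.standard_normal(NFFT))
    return np.concatenate(chunks)

labels = np.array([0, 1] * 50)
features = np.array([qpc_features(make_signal(bool(c))) for c in labels])
clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```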
Abstract:
Software systems are progressively being deployed in many facets of human life, and the failure of such systems has varied impacts on their users. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time and is used to measure quality objectively. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier works focused on software reliability with no consideration of the hardware parts, or vice versa. However, a complete estimate of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying failure data for hardware and software components and building a model on that data to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modeling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and an integrated model for predicting the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
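The abstract does not give the model's form, so the following is only a hedged sketch of the combined idea: a Goel-Okumoto software reliability growth model fitted to synthetic failure data, paired with an assumed exponential hardware failure law, with the two combined in series.

```python
# Hedged sketch of the combined hardware/software idea only (not the thesis'
# actual model): fit a Goel-Okumoto growth model to illustrative failure counts,
# assume an exponential hardware failure law, and combine them in series,
# R_sys(t) = R_hw(t) * R_sw(t).
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative software failures m(t) = a * (1 - e^{-b t})."""
    return a * (1.0 - np.exp(-b * t))

# Illustrative test data: weeks of testing vs. cumulative failures observed.
weeks = np.arange(1, 21, dtype=float)
cum_failures = np.array([5, 9, 13, 16, 19, 21, 23, 25, 26, 27,
                         28, 29, 30, 30, 31, 31, 32, 32, 32, 33], dtype=float)

(a, b), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(35.0, 0.1))

LAMBDA_HW = 1e-3   # assumed constant hardware failure rate (per hour)

def reliability(mission_hours, release_week=20.0):
    """Probability of no hardware or software failure over a mission."""
    r_hw = np.exp(-LAMBDA_HW * mission_hours)
    # residual software failure intensity at release, applied over the mission
    lam_sw = a * b * np.exp(-b * release_week) / 168.0   # per hour (168 h/week)
    r_sw = np.exp(-lam_sw * mission_hours)
    return r_hw * r_sw

print(f"fitted a={a:.1f}, b={b:.3f}; R(24 h) = {reliability(24.0):.3f}")
```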
Abstract:
The assessment of software maturity is an important area in the general software sector, and the field of OSS also applies various models to measure software maturity. However, measuring the maturity of OSS used for applications in libraries is an area with no research so far, and this study attempts to fill that gap. Measuring the maturity of software contributes knowledge about its sustainability over the long term, and maturity is one of the factors that positively influence adoption. The investigator measured the maturity of DSpace software using Woods and Guliani's Open Source Maturity Model (2005). The present study is significant as it addresses the maturity of OSS for libraries and fills the research gap in this area. In this sense the study opens new avenues for the field of library and information science by providing an additional tool for librarians in the selection and adoption of OSS. Measuring maturity brings in-depth knowledge of an OSS product, which contributes towards the perceived usefulness and perceived ease of use described in the Technology Acceptance Model.
Abstract:
One of the major problems facing aquaculture is the inadequate supply of fish oil, most of which is used for fish feed manufacturing. The continued growth of aquaculture production cannot depend on this finite feed resource; it is therefore imperative to find cheap and readily available substitutes that do not compromise fish growth and fillet quality. To this end, a 12-week feeding trial with Heterobranchus longifilis fed diets differing in lipid source was conducted. Diets were supplemented with 6% lipid as fish oil, soybean oil, palm oil, coconut oil, groundnut oil or melon seed oil. Triplicate groups of 20 H. longifilis were fed the experimental diets twice a day to apparent satiation over 84 days. Growth, digestibility and muscle fatty acid profile were measured to assess diet effects. At the end of the study, survival, feed intake and hepatosomatic index were similar for fish fed the experimental diets. However, weight gain, specific growth rate (SGR) and feed conversion ratio (FCR) of fish fed the soybean oil-based diet were significantly reduced. Apparent nutrient digestibility coefficients were significantly lower in fish fed the soybean, coconut and groundnut oil-based diets. Fillet and hepatic fatty acid compositions differed and reflected the fatty acid compositions of the diets. Docosahexaenoic acid (22:6n-3), 20:5n-3 and 20:4n-6 were conserved in fish fed the vegetable oil-based diets, possibly due to synthesis of HUFA from 18:3n-3 and 18:4n-6. The palm oil diet was the least expensive and had the best economic conversion ratio. The use of vegetable oils in the diets had a positive effect on growth and fillet composition of H. longifilis.
Abstract:
Thermal processing of food affects its quality and nutritional properties. In the household, monitoring the temperature inside the food is very difficult. In addition, knowledge of the optimal temperature and time parameters for different dishes is often insufficient. Optimal control of thermal preparation depends largely on the type of food and on the external and internal temperature exposure during cooking. The goal of this work was to develop an automatic oven capable of recognizing the type of food and calculating the temperature inside the food during baking. The data required for the temperature calculation were acquired with several sensors: an infrared thermometer, an infrared distance sensor, a camera, a temperature sensor and a lambda probe inside the oven. In addition, a load cell, current and voltage sensors and a temperature sensor outside the oven were used. The data sets recorded during the heating phase enabled the training of several artificial neural networks, which classified the different foods into the appropriate categories in order to select the optimal baking program. Several artificial neural networks were also trained to estimate the thermal diffusivity of the food, which depends on its composition (carbohydrates, fat, protein, water). With the exception of the fat content, all components could be estimated sufficiently accurately by different ANNs with a maximum of 8 hidden neurons to calculate the temperature inside the food on that basis. The work shows that, with the aid of a variety of sensors for the direct or indirect measurement of the external properties of the food, together with ANNs for categorization and estimation of the food composition, automatic recognition and calculation of the internal temperature of a wide range of foods is possible.
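As a loose illustration of the composition-estimation step, the sketch below uses a small MLP with 8 hidden neurons (the upper bound mentioned above) to map indirect sensor readings to composition fractions and derive a rough thermal diffusivity; the features, training data and diffusivity rule are invented for the example.

```python
# Loose illustration only (features, data and the diffusivity rule are invented):
# an MLP with 8 hidden neurons maps indirect sensor readings to composition
# fractions, from which a rough thermal diffusivity is derived as a
# mass-fraction-weighted average of component diffusivities.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Assumed features: [surface temperature (C), mass (g), height (mm), mean camera brightness]
X = rng.uniform([20, 200, 10, 0.2], [90, 1500, 120, 0.9], size=(300, 4))
# Targets: mass fractions of water, carbohydrate and protein (fat omitted, as it
# was the hardest component to estimate); synthetic labels for the sketch.
Y = rng.dirichlet([4, 3, 2, 1], size=300)[:, :3]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, Y)

# Approximate component thermal diffusivities in mm^2/s (typical magnitudes).
ALPHA = {"water": 0.143, "carbohydrate": 0.089, "protein": 0.081}

def estimated_diffusivity(features):
    water, carb, protein = np.clip(model.predict([features])[0], 0.0, 1.0)
    return water * ALPHA["water"] + carb * ALPHA["carbohydrate"] + protein * ALPHA["protein"]

print(f"estimated diffusivity: {estimated_diffusivity([65, 800, 60, 0.5]):.3f} mm^2/s")
```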
Abstract:
The ongoing depletion of the coastal aquifer in the Gaza strip due to groundwater overexploitation has led to seawater intrusion, which is becoming an increasingly serious problem in Gaza as seawater invades further into many sections along the coastal shoreline. As a first step towards getting a grip on the problem, an artificial neural network (ANN) model has been applied as a new approach and an attractive tool to study and predict groundwater levels without requiring physically based hydrologic parameters, to improve the understanding of the complex groundwater system, and to show the effects of hydrologic, meteorological and anthropogenic impacts on groundwater conditions. Predicting the future behaviour of the seawater intrusion process in the Gaza aquifer is thus of crucial importance to safeguard the already scarce groundwater resources in the region. In this study the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and the dynamics of the saltwater-freshwater interface in the aquifer over the period 2000-2010. Very good agreement between simulated and observed TDS salinities is obtained, with correlation coefficients of 0.902 and 0.883 for the steady-state and transient calibrations, respectively. After successful calibration of the solute transport model, simulations of future management scenarios for the Gaza aquifer have been carried out in order to get a more comprehensive view of whether the artificial recharge planned for the Gaza strip can forestall, or even remedy, the presently existing adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, year 2040. To that end, numerous management schemes are examined to maintain the groundwater system and to control the salinity distribution within the target period 2011-2040. In the first, pessimistic scenario, it is assumed that pumping from the aquifer continues to increase in the near future to meet the rising water demand, and that there is no recharge to the aquifer beyond what is provided by natural precipitation. The second, optimistic scenario assumes that treated surficial wastewater can be used as a source of additional artificial recharge to the aquifer, which in principle should not only lead to an increased sustainable yield but could, in the best case, even revert some of the adverse present-day conditions in the aquifer, i.e., seawater intrusion. This scenario has been run with three different cases, which differ in the locations and extents of the injection fields for the treated wastewater. The results obtained with the first (do-nothing) scenario indicate ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion into the Gaza aquifer: compared with the 2010 situation of the baseline model, by the end of the simulation period, year 2040, the amount of saltwater intrusion into the coastal aquifer will have increased by about 35%, and salinity by 34%. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion.
From the water budget point of view, compared with the first (do-nothing) scenario, by year 2040 the water added to the aquifer by artificial recharge will reduce the amount of water entering the aquifer by seawater intrusion by 81, 77 and 72% for the three recharge cases, respectively, while the salinity in the Gaza aquifer will decrease by 15, 32 and 26% for the three cases, respectively.
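A hedged sketch of the ANN component mentioned in the abstract: a small network predicting a well's groundwater level from the previous level, rainfall and abstraction, without physically based parameters. The inputs, synthetic data and network size are assumptions, not taken from the thesis.

```python
# Illustrative sketch only (inputs, data and network size are assumptions): an
# ANN that predicts next month's groundwater level from the previous level,
# monthly rainfall and monthly abstraction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
months = 240

rainfall = np.clip(rng.gamma(2.0, 15.0, months), 0, None)              # mm/month
pumping = 50.0 + 0.1 * np.arange(months) + rng.normal(0, 5, months)    # 10^3 m3/month
level = np.zeros(months)
level[0] = -2.0
for t in range(1, months):                                             # toy water balance
    level[t] = level[t - 1] + 0.004 * rainfall[t] - 0.01 * pumping[t] + rng.normal(0, 0.05)

X = np.column_stack([level[:-1], rainfall[1:], pumping[1:]])
y = level[1:]

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X[:180]), y[:180])                            # train on first 15 years

r2 = ann.score(scaler.transform(X[180:]), y[180:])
print(f"one-step-ahead R^2 on held-out years: {r2:.2f}")
```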
Abstract:
A 12-week experiment was carried out to investigate the effects of substituting Giant African snail meal for fish meal in laying hen diets. Four diets were formulated to contain snail meal as a replacement for fish meal at 0 (control), 33, 67 and 100%. A total of 120 Shaver Brown pullets aged 18 weeks were allocated to the dietary treatments in a randomised design, with three replicates per treatment and ten birds per replicate. Feed intake increased only for the 33% treatment compared with the 67% replacement diet and did not differ from the other treatments. There were no significant treatment effects on the egg performance parameters observed (egg production, egg weight, total egg mass, feed conversion ratio and percent shell). The overall feed cost of egg production was reduced on the snail meal-based diets. Organoleptic evaluation of boiled eggs revealed no difference between the treatments. Based on these results it was concluded that total replacement of fish meal with cooked snail meat meal does not compromise laying performance or egg quality. The substitution is beneficial in terms of production cost, and the reduction of snail populations will have a beneficial impact, especially where these snails are a serious agricultural pest. The manual collection and processing of snails can also become a source of rural income.
Abstract:
Urban biomass from green spaces is a potential, so far largely unused resource for bioenergy. Municipalities maintain the green spaces but let the material rot or send it to landfills or waste incineration plants. This practice is costly without providing any financial return for the administrations. Instead, the material could be used for energy. Two possible techniques for generating bioenergy were investigated with herbaceous material from urban roadside greenery: i) direct anaerobic fermentation (4 cuts per year) and ii) integrated generation of solid fuel and biogas from biomass (IFBB), which separates the biomass into a press fluid and a press cake by mashing and mechanical dewatering (2 cuts per year). Current maintenance without a utilization option (mulching 8 times per year) was included as a reference. In addition, the suitability of grass-leaf mixtures for the IFBB process was investigated. The mean biomass yield was 3.24, 3.33 and 5.68 t dry matter ha-1 for the mulching, 4-cut and 2-cut management intensities, respectively. Although the fibre concentration in the biomass of the 2-cut system was higher than in the material of the 4-cut system, the methane yields did not differ significantly. The press cake from the herbaceous roadside material had a heating value of 16 MJ kg-1 dry matter, while the heating value of the press cake from the grass-leaf mixtures ranged between 15 and 17 MJ kg-1 dry matter, depending on the ash content. The ash content of the mixtures was higher than the limit of DIN EN 14961-6:2012 (for non-woody fuels), which could be attributed to increased soil adhesion due to the harvesting methods. The ash content of the herbaceous roadside material, however, complied with the standard. The element concentrations (Ca, Cl, K, Mg, N, Na, P, S, Al, Cd, Cr, Cu, Mn, Pb, Si, Zn) in the herbaceous material were generally similar to those of agricultural or conservation grassland. In the mixtures, the element concentrations (Al, Cl, K, N, Na, P, S, Si) decreased with increasing leaf proportion, whereas the concentrations of Ca, Mg and neutral detergent fibre increased. The IFBB technique reliably reduced the concentrations of Cl, K and N, the elements that are particularly harmful in combustion. Apart from the potentially high ash contents, no technical reason was found during the investigations that would preclude the energetic use of the tested urban material. Economic, social and ecological impacts of an implementation must be considered. A cursory assessment based on current knowledge suggests that bioenergetic use of urban material could be sustainable at all levels.
Abstract:
We investigate the properties of feedforward neural networks trained with Hebbian learning algorithms. A new unsupervised algorithm is proposed which produces statistically uncorrelated outputs. The algorithm causes the weights of the network to converge to the eigenvectors of the input correlation matrix with the largest eigenvalues. The algorithm is closely related to the technique of Self-supervised Backpropagation, as well as to other algorithms for unsupervised learning. Applications of the algorithm to texture processing, image coding and stereo depth edge detection are given. We show that the algorithm can lead to the development of filters qualitatively similar to those found in primate visual cortex.
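The convergence behaviour described here matches Sanger's generalized Hebbian algorithm (GHA); the following is a minimal sketch of that rule, offered as a generic illustration rather than the paper's exact algorithm.

```python
# Minimal sketch of Sanger's generalized Hebbian algorithm (GHA), whose weights
# converge to the leading eigenvectors of the input correlation matrix -- the
# behaviour the abstract describes. Generic illustration, not the paper's code.
import numpy as np

rng = np.random.default_rng(4)

# Correlated 5-D data: eigenvectors of C = E[x x^T] are the target directions.
A = rng.normal(size=(5, 5))
X = rng.normal(size=(5000, 5)) @ A.T

n_out, eta, epochs = 2, 1e-3, 5
W = rng.normal(scale=0.1, size=(n_out, 5))        # rows = output units' weights

for _ in range(epochs):
    for x in X:
        y = W @ x
        # GHA update: dW = eta * (y x^T - lower_triangular(y y^T) W)
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare with the top eigenvectors of the sample correlation matrix.
C = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, ::-1][:, :n_out].T               # leading eigenvectors, largest first
for i in range(n_out):
    align = abs(np.dot(W[i] / np.linalg.norm(W[i]), top[i]))
    print(f"unit {i}: |cosine| with eigenvector {i} = {align:.3f}")
```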
Abstract:
The HMAX model has recently been proposed by Riesenhuber & Poggio as a hierarchical model of position- and size-invariant object recognition in visual cortex. It has also turned out to successfully model a number of other properties of the ventral visual stream (the visual pathway thought to be crucial for object recognition in cortex), and particularly of (view-tuned) neurons in macaque inferotemporal cortex, the brain area at the top of the ventral stream. The original modeling study used only "paperclip" stimuli, as in the corresponding physiology experiment, and did not explore systematically how model units' invariance properties depend on model parameters. In this study, we aimed at a deeper understanding of the inner workings of HMAX and of its performance for various parameter settings and "natural" stimulus classes. We examined HMAX responses for different stimulus sizes and positions systematically and found a dependence of model units' responses on stimulus position for which a quantitative description is offered. Interestingly, we find that the scale invariance properties of hierarchical neural models are not independent of stimulus class, as opposed to translation invariance, even though both are affine transformations within the image plane.
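A toy illustration of the pooling mechanism at the heart of HMAX (not the actual model): S-like template-matching units applied at every position, followed by a C-like unit that takes the maximum over a pooling range, which makes the response invariant to shifts within that range.

```python
# Toy illustration of HMAX-style pooling (not the actual model): an S-like
# layer of template-matching units at every position, followed by a C-like
# unit that MAX-pools over a spatial range, yielding translation invariance
# within that range.
import numpy as np

def s_layer(image, template):
    """Template match (normalised dot product) at every valid position."""
    k = len(template)
    t = template / np.linalg.norm(template)
    out = []
    for i in range(len(image) - k + 1):
        patch = image[i:i + k]
        n = np.linalg.norm(patch)
        out.append(patch @ t / n if n > 0 else 0.0)
    return np.array(out)

def c_unit(s_responses, start, size):
    """MAX pooling over a spatial range (the core HMAX nonlinearity)."""
    return s_responses[start:start + size].max()

template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])     # a small "bar" feature

def stimulus(shift, n=40):
    img = np.zeros(n)
    img[10 + shift:15 + shift] = template
    return img

for shift in (0, 3, 6):
    c = c_unit(s_layer(stimulus(shift), template), start=5, size=20)
    print(f"shift={shift}: C response = {c:.3f}")   # identical across shifts in range
```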
Abstract:
We present an overview of current research on artificial neural networks, emphasizing a statistical perspective. We view neural networks as parameterized graphs that make probabilistic assumptions about data, and view learning algorithms as methods for finding parameter values that look probable in the light of the data. We discuss basic issues in representation and learning, and treat some of the practical issues that arise in fitting networks to data. We also discuss links between neural networks and the general formalism of graphical models.
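A small sketch of the statistical view described in this abstract: a one-hidden-layer network with a sigmoid output read as a Bernoulli model p(y=1|x), trained by gradient ascent on the log-likelihood. The data and network size are illustrative.

```python
# Sketch of the statistical view: a one-hidden-layer network with a sigmoid
# output is read as a Bernoulli model p(y=1 | x, w), and learning is
# maximum-likelihood estimation by gradient ascent on the log-likelihood
# (equivalently, minimising cross-entropy). Data and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # XOR-like labels

H, eta = 8, 0.5
W1, b1 = rng.normal(scale=0.5, size=(2, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.5, size=H), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    p = sigmoid(h @ W2 + b2)                       # p(y = 1 | x, parameters)
    # gradient of the mean log-likelihood  (1/n) sum [y log p + (1 - y) log(1 - p)]
    d_out = (y - p) / len(X)
    d_hid = np.outer(d_out, W2) * (1.0 - h**2)     # backpropagate through tanh
    W2 += eta * h.T @ d_out
    b2 += eta * d_out.sum()
    W1 += eta * X.T @ d_hid
    b1 += eta * d_hid.sum(axis=0)

log_lik = np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(f"mean log-likelihood: {log_lik:.3f}, accuracy: {((p > 0.5) == y).mean():.2f}")
```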