991 results for ANN model


Relevance: 60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Publisher:

Abstract:

The few existing studies on macrobenthic communities of the deep Arctic Ocean report low standing stocks and confirm a gradient of declining biomass from the slopes down to the basins, as commonly reported for deep-sea benthos. In this study we further investigated the relationships of faunal abundance (N), biomass (B) and community production (P) with water depth, geographical latitude and sea-ice concentration. The underlying dataset combines legacy data from the past 20 years with recent field studies selected according to standardized quality-control procedures. Community P/B and production were estimated using the multi-parameter ANN model developed by Brey (2012). We confirmed the previously described negative relationship between water depth and macrofauna standing stock in the Arctic deep sea. Furthermore, sea-ice cover, which increases towards high latitudes, correlated with abundances declining to < 200 individuals/m**2, biomasses of < 65 mg C/m**2 and P of < 75 mg C/m**2/y. Stations under the influence of the seasonal ice zone (SIZ) showed much higher standing stocks, with mean P between 400 and 1400 mg C/m**2/y, even at depths up to 3700 m. We conclude that particle flux is the key factor structuring benthic communities in the deep Arctic Ocean, explaining both the low values in the ice-covered Arctic basins and the high values along the SIZ.
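The production estimates above follow the standard relation P = (P/B) × B, where Brey's (2012) multi-parameter ANN model supplies the P/B ratio. A minimal sketch of that final step, with an illustrative P/B value rather than an actual model output:

```python
# Community production from biomass and an estimated P/B ratio.
# The P/B value below is an illustrative placeholder, NOT an output
# of Brey's (2012) ANN model.

def community_production(biomass_mg_c_m2: float, pb_ratio_per_year: float) -> float:
    """P = (P/B) * B, returned in mg C/m**2/y."""
    return biomass_mg_c_m2 * pb_ratio_per_year

# Example: a station with B = 65 mg C/m**2 and an assumed P/B of 1.2/y
p = community_production(65.0, 1.2)
print(round(p, 1))  # 78.0 mg C/m**2/y
```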

Relevance: 60.00%

Publisher:

Abstract:

This paper describes the accurate characterization of the reflection coefficients of a multilayered reflectarray element by means of artificial neural networks. The procedure has been tested with different RA elements related to actual specifications. Up to nine parameters were considered, and the complete reflection-coefficient matrix, including cross-polar reflection coefficients, was accurately obtained. Results show good agreement between simulations carried out by the Method of Moments and the ANN model outputs at RA element level, as well as with the performance of the complete RA antenna design.
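Training an ANN on a complex 2×2 reflection-coefficient matrix requires a real-valued target encoding. The paper does not specify its encoding; one common choice, shown here as a sketch, is to stack the real and imaginary parts of the co- and cross-polar terms into a single vector:

```python
import numpy as np

# Hypothetical 2x2 complex reflection-coefficient matrix (diagonal:
# co-polar terms; off-diagonal: cross-polar terms). Values are illustrative.
R = np.array([[0.90 * np.exp(1j * 0.3), 0.05 * np.exp(-1j * 1.2)],
              [0.04 * np.exp(1j * 2.0), 0.88 * np.exp(-1j * 0.5)]])

def matrix_to_target(R):
    """Flatten a complex 2x2 matrix into an 8-value real training target."""
    return np.concatenate([R.real.ravel(), R.imag.ravel()])

def target_to_matrix(t):
    """Inverse mapping: rebuild the complex 2x2 matrix from ANN outputs."""
    return (t[:4] + 1j * t[4:]).reshape(2, 2)

t = matrix_to_target(R)
assert np.allclose(target_to_matrix(t), R)  # round-trip is lossless
```

Encoding real and imaginary parts (rather than magnitude and phase) avoids the 2π phase-wrapping discontinuity that makes regression targets hard to learn.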

Relevance: 60.00%

Publisher:

Abstract:

This thesis considers two basic aspects of impact damage in composite materials: damage severity discrimination and impact damage location, using Acoustic Emission (AE) and Artificial Neural Networks (ANNs). The experimental work covers the application of AE as a non-destructive testing (NDT) technique and the evaluation of ANN modelling, with ANNs playing the central role in the modelling implementation. In the first part of the study, different impact energies were used to produce different levels of damage in two composite materials (T300/914 and T800/5245). The impacts were detected via their acoustic emissions, and the AE waveform signals were analysed and modelled using a Back Propagation (BP) neural network. The Mean Square Error (MSE) of the network output was then used as a damage indicator in the damage severity discrimination study. To evaluate the ANN model, the correlation coefficients of different parameters (MSE, AE energy, AE counts, etc.) were compared; MSE gave the best correlation performance. In the second part, a new ANN model was developed to locate impact damage on a quasi-isotropic composite panel. It was successfully trained to locate impact sites by correlating the differences in arrival times of AE signals at transducers mounted on the panel with the impact-site coordinates. The performance of the ANN model, evaluated by the distance deviation between the model output and the true location coordinates, supports the application of ANNs as impact damage location identifiers. The accuracy of location prediction decreased towards the central area of the panel; further investigation indicated that this is due to the small arrival-time differences there, which degrade the ANN's predictive performance. The research suggests increasing the number of processing neurons in the ANNs as a practical solution.
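The impact-location idea can be sketched end to end: simulate arrival-time differences for a panel with sensors at known positions, then train a small backpropagation network to map those differences back to impact coordinates. Everything below (sensor layout, wave speed, network size) is an assumed toy setup, not the thesis configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four hypothetical sensors at the corners of a unit panel.
sensors = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)

def time_deltas(xy):
    """Arrival-time differences relative to sensor 0 (unit wave speed)."""
    d = np.linalg.norm(sensors - xy, axis=1)
    return d[1:] - d[0]

# Synthetic training set: random impact sites and their time deltas.
sites = rng.uniform(0.1, 0.9, size=(500, 2))
X = np.array([time_deltas(s) for s in sites])
Y = sites

# One-hidden-layer network trained by plain backpropagation (MSE loss).
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted coordinates
    err = P - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
mean_dev = np.linalg.norm(pred - Y, axis=1).mean()
print(f"mean location deviation: {mean_dev:.3f}")
```

The distance deviation computed at the end mirrors the thesis's evaluation metric; near the panel centre the time deltas shrink towards zero, which is the degradation the study reports.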

Relevance: 60.00%

Publisher:

Abstract:

Since wind at the earth's surface has an intrinsically complex and stochastic nature, accurate wind-power forecasts are necessary for the safe and economic use of wind energy. In this paper, we investigated a combination of numeric and probabilistic models: a Gaussian process (GP) combined with a numerical weather prediction (NWP) model was applied to wind-power forecasting up to one day ahead. First, the wind-speed data from the NWP model were corrected by a GP; then, because the turbine control strategy imposes a defined limit on the power generated by a wind turbine, wind-power forecasts were produced by modelling the relationship between the corrected wind speed and power output with a censored GP. To validate the proposed approach, three real-world datasets were used for model training and testing. The empirical results were compared with several classical wind-forecast models; in terms of mean absolute error (MAE), the proposed model provides around 9% to 14% improvement in forecasting accuracy over an artificial neural network (ANN) model, and nearly 17% improvement on a third dataset from a newly built wind farm for which only a limited amount of training data is available. © 2013 IEEE.

Relevance: 60.00%

Publisher:

Abstract:

Since wind has an intrinsically complex and stochastic nature, accurate wind-power forecasts are necessary for the safe and economic utilization of wind energy. In this paper, we investigate a combination of numeric and probabilistic models: one-day-ahead wind-power forecasts were made with Gaussian Processes (GPs) applied to the outputs of a Numerical Weather Prediction (NWP) model. First, the wind-speed data from the NWP model were corrected by a GP. Then, because the turbine control strategy imposes a defined limit on the power generated by a wind turbine, a censored GP was used to model the relationship between the corrected wind speed and power output. To validate the proposed approach, two real-world datasets were used for model construction and testing. The simulation results were compared with the persistence method and Artificial Neural Networks (ANNs); the proposed model achieves about an 11% improvement in forecasting accuracy (mean absolute error) over the ANN model on one dataset, and nearly 5% on the other.
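The speed-to-power step can be sketched in a few lines. This is a deliberately simplified stand-in: full censored-GP inference uses a Tobit-style likelihood, whereas here a standard GP posterior mean is simply clipped at the rated limit; the data, kernel, and rated power are all synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=2.0, var=1.0):
    """Squared-exponential (RBF) kernel between 1-D input sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

# Synthetic wind speeds and observed power, flat above rated output.
RATED = 1.0
speed = np.sort(rng.uniform(0, 15, 60))
power = np.clip((speed / 10.0) ** 3, 0, RATED) + rng.normal(0, 0.03, 60)

# Standard GP regression on (speed, power) pairs ...
K = rbf(speed, speed) + 0.03**2 * np.eye(60)   # kernel + noise variance
alpha = np.linalg.solve(K, power)

grid = np.linspace(0, 15, 50)
mean = rbf(grid, speed) @ alpha                # posterior mean on a grid

# ... then impose the turbine's power limit. NOTE: this clipping step
# replaces true censored (Tobit) GP inference for illustration only.
forecast = np.clip(mean, 0.0, RATED)
```

The clipped curve reproduces the key qualitative feature the papers exploit: predicted power never exceeds the rated limit imposed by the turbine control strategy.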

Relevance: 60.00%

Publisher:

Abstract:

Traffic incidents are a major source of traffic congestion on freeways. Freeway traffic diversion using pre-planned alternate routes has been used as a strategy to reduce traffic delays due to major traffic incidents. However, it is not always beneficial to divert traffic when an incident occurs: route diversion may adversely impact traffic on the alternate routes and may not result in an overall benefit. This dissertation research applies Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques to predict the percent of delay reduction from route diversion, to help determine whether traffic should be diverted under given conditions. The DYNASMART-P mesoscopic traffic simulation model was applied to generate the simulated data used to develop the ANN and SVR models. A sample network that comes with the DYNASMART-P package was used as the base simulation network. Combinations of different levels of incident duration, capacity lost, percent of drivers diverted, VMS (variable message sign) messaging duration, and network congestion were simulated to represent different incident scenarios. The resulting percent of delay reduction, average speed, and queue length from each scenario were extracted from the simulation output. The ANN and SVR models were then calibrated for percent of delay reduction as a function of all of the simulated input and output variables. The results show that both calibrated models, when applied to the same location used to generate the calibration data, predicted delay reduction with relatively high accuracy in terms of mean square error (MSE) and regression correlation. It was also found that the ANN model outperformed the SVR model. Likewise, when the models were applied to a new location, only the ANN model produced comparatively good delay-reduction predictions under high network congestion levels.
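The two model families compared above can be sketched side by side on synthetic scenario data. Everything here is assumed for illustration: the features, the target function, and the use of a linear epsilon-insensitive regressor as a simplified stand-in for a full kernel SVR:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic incident scenarios: incident duration, capacity lost,
# percent of drivers diverted (all scaled to [0, 1]). The target
# "percent of delay reduction" is an invented function of them.
X = rng.uniform(0, 1, (300, 3))
y = 20 * X[:, 2] - 10 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, 300)

# Linear epsilon-insensitive regressor (simplified SVR stand-in),
# trained by subgradient descent on the epsilon-insensitive loss.
w = np.zeros(3); b = 0.0
eps, lr, lam = 0.5, 0.02, 1e-4
for _ in range(5000):
    r = X @ w + b - y
    g = np.where(np.abs(r) > eps, np.sign(r), 0.0)  # loss subgradient
    w -= lr * (X.T @ g / len(X) + lam * w)
    b -= lr * g.mean()
mse_svr = np.mean((X @ w + b - y) ** 2)

# One-hidden-layer ANN trained on the same data (MSE loss).
W1 = rng.normal(0, 0.5, (3, 12)); b1 = np.zeros(12)
W2 = rng.normal(0, 0.5, (12, 1)); b2 = np.zeros(1)
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2) - y[:, None]
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    W1 -= 0.05 * (X.T @ dH / len(X)); b1 -= 0.05 * dH.mean(0)
    W2 -= 0.05 * gW2; b2 -= 0.05 * gb2
mse_ann = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - y[:, None]) ** 2)

print(f"SVR-style MSE: {mse_svr:.2f}  ANN MSE: {mse_ann:.2f}")
```

Because the synthetic target includes an interaction term, the nonlinear ANN can in principle fit it while the linear stand-in cannot; no claim is made here about which family wins in general, which is the empirical question the dissertation addresses.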


Relevance: 30.00%

Publisher:

Abstract:

It has been recognised that brands play a role in industrial markets, but to date a comprehensive model of business-to-business (B2B) branding does not exist, nor has there been an empirical study of the applicability of a full brand equity model in a B2B context. This paper is the first to begin to address these issues. The paper introduces the Customer-Based Brand Equity (CBBE) model by Kevin Keller (1993; 2001; 2003) and empirically tests its applicability in the market of electronic tracking systems for waste management. While Keller claims that the CBBE pyramid can be applied in a B2B context, this research highlights the challenges of such an application and suggests that changes to the model are required. First, assessing the equity of manufacturers' brand names is more appropriate than measuring the equity of individual product brands, as suggested by Keller. Secondly, the building blocks of Keller's model appear useful in an organisational context, although differences in the sub-dimensions are required: brand feelings appear to lack relevance in the industrial market investigated, and the pinnacle of Keller's pyramid, resonance, needs serious modification. Finally, company representatives play a role in building brand equity, indicating a need for this human element to be recognised in a B2B model.

Relevance: 30.00%

Publisher:

Abstract:

Purpose – The importance of branding in industrial contexts has increased, yet a comprehensive model of business-to-business (B2B) branding does not exist, nor has there been a thorough empirical study of the applicability of a full brand equity model in a B2B context. This paper aims to discuss the suitability and limitations of Keller's customer-based brand equity model and tests its applicability in a B2B market. Design/methodology/approach – The study involved semi-structured interviews with senior buyers of technology for electronic tracking of waste management. Findings – Findings suggest that amongst organisational buyers there is a much greater emphasis on the selling organisation, including its corporate brand, credibility and staff, than on individual brands and their associated dimensions. Research limitations/implications – The study investigates real brands with real potential buyers, so there is a risk that the results may reflect industry-specific factors that are not representative of all B2B markets. Future research that validates the importance of the Keller elements in other industrial marketing contexts would be beneficial. Practical implications – The findings are relevant for marketing practitioners, researchers and managers as a starting point for their B2B brand equity research. Originality/value – Detailed insights and key lessons from the field regarding how B2B brand equity should be conceptualised and measured are offered. A revised brand equity model for B2B application is also presented.

Relevance: 30.00%

Publisher:

Abstract:

Currently, well-established clinical therapeutic approaches for bone reconstruction are restricted to the transplantation of autografts and allografts, and the implantation of metal devices or ceramic-based implants to assist bone regeneration. Bone grafts possess osteoconductive and osteoinductive properties; however, they are limited in access and availability and associated with donor-site morbidity, haemorrhage, risk of infection, insufficient transplant integration, graft devitalisation, and subsequent resorption resulting in decreased mechanical stability. As a result, recent research focuses on the development of alternative therapeutic concepts. The field of tissue engineering has emerged as an important approach to bone regeneration. However, bench-to-bedside translations are still infrequent, as the process towards approval by regulatory bodies is protracted and costly, requiring both comprehensive in vitro and in vivo studies. The resulting gap between research and clinical translation, and hence commercialization, is referred to as the 'Valley of Death': a large number of projects and ventures are ceased due to a lack of funding during the transition from product/technology development to regulatory approval and subsequent commercialization. One of the greatest difficulties in bridging the Valley of Death is to develop good manufacturing practice (GMP) processes and scalable designs, and to apply these in pre-clinical studies. In this article, we describe part of the rationale and road map of how our multidisciplinary research team has approached the first steps of translating orthopaedic bone engineering from bench to bedside by establishing a pre-clinical ovine critical-sized tibial segmental bone defect model, and we discuss our preliminary data relating to this decisive step.