978 results for radial distribution functions


Relevance: 80.00%

Abstract:

Electronic spray controllers aim to minimise the variation of the input rates applied in the field. They are part of a control system and compensate for changes in the sprayer's travel speed during operation. Several types of electronic spray controller are available on the market, and one way to select the most efficient under identical conditions, i.e. within the same control system, is to quantify the system's response time for each specific controller. The objective of this work was to estimate the response times to speed changes of an electronic spray system using nonlinear regression models built as sums of linear regressions weighted by cumulative distribution functions. The data were obtained at the Application Technology Laboratory of the Department of Biosystems Engineering, "Luiz de Queiroz" College of Agriculture, University of São Paulo, in Piracicaba, São Paulo, Brazil. The models used were the logistic and Gompertz models, which result from a weighted sum of two constant linear regressions with weights given by the logistic and Gumbel cumulative distribution functions, respectively. Reparametrisations were proposed to include the response time of the control system among the model parameters, improving their interpretability and the associated statistical inference. A biphasic nonlinear regression model was also proposed, resulting from a weighted sum of constant linear regressions with weight given by the Cauchy hyperbolic sine exponential cumulative distribution function. A simulation study was carried out using the Monte Carlo methodology to evaluate the maximum likelihood estimates of the model parameters.
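
As an illustration of the construction described in this abstract (notation ours, not the paper's), such a model writes the response as a weighted sum of two constant regressions, with the weight supplied by a cumulative distribution function F:

    y(t) = \beta_1 \, [1 - F(t)] + \beta_2 \, F(t) + \varepsilon
    F(t) = 1 / (1 + e^{-(t-\mu)/s})       % logistic CDF -> logistic model
    F(t) = \exp\{ -e^{-(t-\mu)/s} \}      % Gumbel CDF   -> Gompertz-type model

Here \beta_1 and \beta_2 are the steady levels before and after the speed change, while \mu and s locate and scale the transition; the proposed reparametrisations replace these by quantities with a direct operational meaning, such as the system response time.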

Relevance: 80.00%

Abstract:

We present a massive equilibrium simulation of the three-dimensional Ising spin glass at low temperatures. The Janus special-purpose computer has allowed us to equilibrate, using parallel tempering, L = 32 lattices down to T ≈ 0.64Tc. We demonstrate the relevance of equilibrium finite-size simulations for understanding experimental non-equilibrium spin glasses in the thermodynamic limit by establishing a time-length dictionary. We conclude that non-equilibrium experiments performed on a time scale of one hour can be matched with equilibrium results on L ≈ 110 lattices. A detailed investigation of the probability distribution functions of the spin and link overlaps, as well as of their correlation functions, shows that Replica Symmetry Breaking is the appropriate theoretical framework for the physically relevant length scales. In addition, we improve on existing methodologies for ensuring equilibration in parallel tempering simulations.
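
For readers unfamiliar with the method, the replica-swap step at the heart of parallel tempering can be sketched in a few lines (a minimal Python illustration under our own conventions, not the Janus implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    def attempt_swaps(betas, energies, configs):
        """One round of replica-exchange swap attempts between replicas at
        neighbouring inverse temperatures.  A swap of replicas i and i+1 is
        accepted with the Metropolis probability
        min(1, exp[(beta_i - beta_{i+1}) * (E_i - E_{i+1})]),
        which preserves detailed balance in the extended ensemble."""
        for i in range(len(betas) - 1):
            log_p = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
            if log_p >= 0 or rng.random() < np.exp(log_p):
                configs[i], configs[i + 1] = configs[i + 1], configs[i]
                energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return energies, configs

Between swap rounds each replica evolves under ordinary single-temperature Monte Carlo updates; the hot replicas cross free-energy barriers easily and carry configurations down to the low temperatures of interest.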

Relevance: 80.00%

Abstract:

We combine multi-wavelength data in the AEGIS-XD and C-COSMOS surveys to measure the typical dark matter halo mass of X-ray selected active galactic nuclei (AGN) [L_X(2–10 keV) > 10^42 erg s^−1] in comparison with far-infrared selected star-forming galaxies detected in the Herschel/PEP survey (PACS Evolutionary Probe; L_IR > 10^11 L_⊙) and quiescent systems at z ≈ 1. We develop a novel method to measure the clustering of extragalactic populations that uses photometric redshift probability distribution functions in addition to any available spectroscopy. This is advantageous in that all sources in the sample are used in the clustering analysis, not just the subset with secure spectroscopy. The method works best for large samples, since the loss of accuracy caused by the lack of spectroscopy is balanced by the larger number of sources used to measure the clustering. We find that X-ray AGN, far-infrared selected star-forming galaxies and passive systems in the redshift interval 0.6 < z < 1.4 reside in haloes of similar mass, log M_DMH/(M_⊙ h^−1) ≈ 13.0. We argue that this is because the galaxies in all three samples (AGN, star-forming, passive) have similar stellar mass distributions, approximated here by the J-band luminosity. Therefore, all galaxies that can potentially host X-ray AGN, because they have stellar masses in the appropriate range, live in dark matter haloes of log M_DMH/(M_⊙ h^−1) ≈ 13.0, independent of their star formation rates. This suggests that the stellar mass of X-ray AGN hosts drives the observed clustering properties of this population. We also speculate that trends between AGN properties (e.g. luminosity, level of obscuration) and large-scale environment may be related to differences in the stellar mass of the host galaxies.
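
The following fragment is a schematic of one way photometric-redshift PDFs can enter a clustering measurement, by Monte Carlo sampling redshift realisations for every source (an illustrative simplification under our own conventions; the paper's estimator uses the PDFs directly in the pair counts):

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_redshifts(z_grid, pdfs, n_draws=100):
        """Draw `n_draws` Monte Carlo redshift realisations for each source
        from its photometric-redshift PDF (one row of `pdfs`, tabulated on
        `z_grid`).  A source with a secure spectroscopic redshift simply
        carries a delta-like PDF centred on that redshift."""
        draws = np.empty((len(pdfs), n_draws))
        for i, pdf in enumerate(pdfs):
            cdf = np.cumsum(pdf)
            cdf /= cdf[-1]                      # normalise to a proper CDF
            draws[i] = np.interp(rng.random(n_draws), cdf, z_grid)
        return draws

Each realisation yields one estimate of the correlation function; averaging over realisations propagates the photo-z uncertainty into the clustering measurement instead of discarding the sources without spectroscopy.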

Relevance: 80.00%

Abstract:

Clusters of galaxies are expected to be reservoirs of cosmic rays (CRs) that should produce diffuse γ-ray emission through their hadronic interactions with the intra-cluster medium. The nearby Perseus cool-core cluster, identified as the most promising target for such a search, was observed with the MAGIC telescopes at very high energies (VHE, E ≥ 100 GeV) for a total of 253 h from 2009 to 2014. The active nuclei of NGC 1275, the central dominant galaxy of the cluster, and of IC 310, lying at about 0.6° from the centre, were detected as point-like VHE γ-ray emitters during the first phase of this campaign. We report an updated measurement of the NGC 1275 spectrum, which is well described by a power law with a photon index Γ = 3.6 ± 0.2_(stat) ± 0.2_(syst) between 90 GeV and 1200 GeV. We do not detect any diffuse γ-ray emission from the cluster and therefore set stringent constraints on its CR population. To bracket the uncertainties in the CR spatial and spectral distributions, we adopt different spatial templates and power-law spectral indices α. For α = 2.2, the CR-to-thermal pressure within the cluster virial radius is constrained to be ≤ 1–2%, unless CRs can propagate out of the cluster core, generating a flatter radial distribution and relaxing the CR-to-thermal pressure constraint to ≤ 20%. Assuming that the observed radio mini-halo of Perseus is generated by secondary electrons from CR hadronic interactions, we can derive lower limits on the central magnetic field, B_(0), that depend on the CR distribution. For α = 2.2, B_(0) ≥ 5–8 µG, which is below the ∼25 µG inferred from Faraday rotation measurements, whereas for α ≤ 2.1 the hadronic interpretation of the diffuse radio emission is in conflict with our γ-ray flux upper limits, independently of the magnetic field strength.
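
For reference, the quoted spectral fit corresponds to a simple power law (normalisation symbols ours):

    dN/dE = N_0 (E / E_0)^{-\Gamma},   \Gamma = 3.6 ± 0.2_(stat) ± 0.2_(syst),   90 GeV < E < 1200 GeV

where N_0 is the flux normalisation at a reference energy E_0.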

Relevance: 80.00%

Abstract:

We have studied the radial dependence of the energy deposition of the secondary electrons generated by swift proton beams incident with energies T = 50 keV–5 MeV on poly(methylmethacrylate) (PMMA). Two different approaches have been used to model the electronic excitation spectrum of PMMA through its energy loss function (ELF), namely the extended-Drude ELF and the Mermin ELF. The singly differential cross section and the total cross section for ionization, as well as the average energy of the generated secondary electrons, show sizeable differences at T ⩽ 0.1 MeV when evaluated with these two ELF models. In order to obtain the radial distribution around the proton track of the energy deposited by the cascade of secondary electrons, a simulation has been performed that follows the motion of the electrons through the target, taking into account both the inelastic interactions (electronic ionizations and excitations, electron-phonon scattering, and electron trapping by polaron creation) and the elastic interactions. The radial distribution of the energy deposited by the secondary electrons around the proton track shows notable differences between the simulations performed with the extended-Drude ELF and the Mermin ELF, with the former more spread out (and therefore less peaked) than the latter. The deposited energy distributions are most intense and sharply peaked for proton beams incident with T ~ 0.1–1 MeV. We have also studied how the radial distribution of deposited energy changes when the full energy distribution of secondary electrons generated by proton impact is used rather than a single value (namely, the average of the distribution); our results show that the differences between the two simulations become important for proton energies larger than ~0.1 MeV. The results presented in this work have potential applications in materials science, as well as in hadron therapy (where PMMA is used as a tissue phantom), for properly accounting for the generation of electrons by proton beams and their subsequent transport and energy deposition in the target on nanometric scales.
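
As a sketch of the bookkeeping behind such a radial profile (our own variable names and units, not the authors' code): once the simulation has recorded where each secondary electron deposits its energy, the radial distribution follows from a histogram over cylindrical shells centred on the track axis.

    import numpy as np

    def radial_dose_profile(radii, deposits, r_max=50.0, n_bins=100):
        """Bin energy deposits (eV) by radial distance (nm) from the proton
        track axis and divide by the shell areas, giving energy per unit
        volume and unit track length under cylindrical symmetry."""
        edges = np.linspace(0.0, r_max, n_bins + 1)
        energy, _ = np.histogram(radii, bins=edges, weights=deposits)
        shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
        centres = 0.5 * (edges[1:] + edges[:-1])
        return centres, energy / shell_area   # eV per nm^3 per nm of track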

Relevance: 80.00%

Abstract:

NPT and NVT Monte Carlo simulations are applied to models for methane and water to predict the PVT behaviour of these fluids over a wide range of temperatures and pressures. The potential models examined in this paper have previously been presented in the literature, with their parameters optimised to fit phase coexistence data. The exponential-6 potential for methane gives generally good predictions of PVT behaviour over the full range of temperatures and pressures studied, with the only significant deviation from experimental data seen at high temperatures and pressures. The NSPCE water model shows very poor prediction of PVT behaviour, particularly at dense conditions. To improve this, the charge separation in the NSPCE model is varied with density. Improvements in the vapour and liquid phase PVT predictions are achieved with this variation. No improvement was found in the prediction of the oxygen-oxygen radial distribution function by varying the charge separation under dense phase conditions.
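
For reference, one common parametrisation of the exponential-6 potential mentioned above is

    U(r) = \frac{\epsilon}{\alpha - 6} \left[ 6\, e^{\alpha (1 - r/r_m)} - \alpha \left( \frac{r_m}{r} \right)^{6} \right]

where \epsilon is the well depth, r_m the position of the minimum and \alpha the steepness of the repulsive wall; the methane parameter values are those fitted to phase coexistence data in the cited literature.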

Relevance: 80.00%

Abstract:

An overview of neural networks, covering multilayer perceptrons, radial basis functions, constructive algorithms, Kohonen and K-means unsupervised algorithms, RAMnets, first- and second-order training methods, and Bayesian regularisation methods.
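
As a pointer for readers new to one of the listed topics: with fixed centres and widths, a Gaussian radial basis function network reduces to linear least squares in the output weights. The sketch below is our own minimal Python version, not code from the overview.

    import numpy as np

    def rbf_design(X, centres, width):
        """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))."""
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def fit_rbf(X, y, centres, width, reg=1e-6):
        """Output weights by regularised least squares; the ridge term `reg`
        is one simple stand-in for the Bayesian regularisation the overview
        discusses."""
        Phi = rbf_design(X, centres, width)
        return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)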

Relevance: 80.00%

Abstract:

The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to the domain experts (screening scientists, biologists, chemists, etc.), we developed software based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis, NeuroScale, PhiVis, and Locally Linear Embedding (LLE), and provides global and local regression facilities, supporting regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function network (RBF), Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to be taken care of when creating a new model, and explains how to install and use the tool. The manual does not require readers to be familiar with the algorithms it implements; basic computing skills are enough to operate the software.

Relevance: 80.00%

Abstract:

A study of the hydrodynamics and mass transfer characteristics of a liquid-liquid extraction process in a 450 mm diameter, 4.30 m high Rotating Disc Contactor (R.D.C.) has been undertaken. The literature relating to this type of extractor and the relevant phenomena, such as droplet break-up and coalescence, drop mass transfer and axial mixing, has been reviewed. Experiments were performed using the system Clairsol-350-acetone-water, and the effects of drop size, drop size distribution and dispersed phase hold-up on the performance of the R.D.C. were established. The results obtained for the two-phase system Clairsol-water have been compared with published correlations; since most of these correlations are based on data obtained from laboratory-scale R.D.C.'s, a wide divergence was found. The hydrodynamic data from this study have therefore been correlated to predict the drop size and the dispersed phase hold-up, and agreement with the experimental data has been obtained to within ±8% for the drop size and ±9% for the dispersed phase hold-up. The correlations obtained were modified to include terms involving the column dimensions, and the data have been correlated with the results obtained from this study together with published data; agreement was generally within ±17% for drop size and within ±14% for the dispersed phase hold-up. The experimental drop size distributions obtained were in excellent agreement with the upper-limit log-normal distribution, which should therefore be used in preference to other distribution functions. In the calculation of the overall experimental mass transfer coefficient, the mean driving force was determined from the concentration profile along the column using Simpson's Rule, and a novel method was developed to calculate the overall theoretical mass transfer coefficient Kca, using the drop size distribution diagram to determine the volume percentage of stagnant, circulating and oscillating drops in the sample population. Individual mass transfer coefficients were determined for each droplet state using different single-drop mass transfer models; Kca was then calculated as the fractional sum of these individual coefficients weighted by their proportions in the drop sample population. Very good agreement was found between the experimental and theoretical overall mass transfer coefficients. Drop sizes under mass transfer conditions were strongly dependent on the direction of mass transfer: drop sizes in the absence of mass transfer were generally larger than those with solute transfer from the continuous to the dispersed phase, but smaller than those with solute transfer in the opposite direction, at corresponding phase flowrates and rotor speeds. Under similar operating conditions, hold-up was also affected by mass transfer; it was higher when solute transferred from the continuous to the dispersed phase and lower when the direction was reversed, compared with non-mass-transfer operation.
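
As an illustration of the Simpson's Rule step described above (variable names and the SciPy routine are our choices, not the thesis's), the mean driving force is the column average of the local driving force computed from the sampled concentration profile:

    import numpy as np
    from scipy.integrate import simpson

    def mean_driving_force(z, c_bulk, c_star):
        """Column-average driving force, (1/H) * integral of (c* - c) dz,
        evaluated with Simpson's Rule from concentrations sampled at axial
        positions `z` along a column of height H = z[-1] - z[0]."""
        driving = np.asarray(c_star) - np.asarray(c_bulk)
        return simpson(driving, x=z) / (z[-1] - z[0])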

Relevance: 80.00%

Abstract:

The article deals with the CFD modelling of fast pyrolysis of biomass in an Entrained Flow Reactor (EFR). The Lagrangian approach is adopted for the particle tracking, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model includes the thermal degradation of biomass to char with simultaneous evolution of gases and tars from a discrete biomass particle. The chemical reactions are represented by a two-stage, semi-global model. The radial distribution of the pyrolysis products is predicted, as is their effect on the particle properties. The convective heat transfer to the surface of the particle is computed using the Ranz-Marshall correlation.
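
For reference, the Ranz-Marshall correlation referred to above gives the particle Nusselt number as

    Nu = \frac{h\, d_p}{k_g} = 2 + 0.6\, Re^{1/2}\, Pr^{1/3}

where h is the convective heat transfer coefficient, d_p the particle diameter, k_g the gas thermal conductivity, and Re and Pr the particle Reynolds and gas Prandtl numbers; the leading 2 recovers pure conduction into a stagnant gas.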

Relevance: 80.00%

Abstract:

This paper presents forecasting techniques for one-day-ahead energy demand and price prediction. These techniques combine the wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches to combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that forecasting accuracy is significantly improved by using the WT and adaptive models. The best models for the electricity demand and gas price forecasts are the adaptive MLP and adaptive GARCH, respectively, with the multicomponent forecast; their MSEs are 0.02314 and 0.15384.
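
The multicomponent approach can be sketched as follows (a minimal Python illustration; the PyWavelets library, wavelet and decomposition depth are our choices, not necessarily the authors'):

    import numpy as np
    import pywt

    def wavelet_components(series, wavelet="db4", level=3):
        """Split a series into additive components with the discrete wavelet
        transform: component k is the reconstruction keeping only the k-th
        coefficient band.  By linearity the components sum back to the
        original series."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        components = []
        for k in range(len(coeffs)):
            kept = [c if i == k else np.zeros_like(c)
                    for i, c in enumerate(coeffs)]
            components.append(pywt.waverec(kept, wavelet)[:len(series)])
        return components

In a multicomponent forecast each component is predicted by its own model (MLP, GARCH, ...) and the predictions are summed; a direct forecast instead feeds the wavelet features jointly into a single model.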

Relevance: 80.00%

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation, and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance, which gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification, based on memorisation of random features, can be a powerful alternative to slower cost-driven models. The speed of the method comes at the expense of its optimality. RAMnets will fail for certain datasets, but the cases in which they do so are relatively easy to determine with the analytical tools we provide.
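
For concreteness, a minimal binary n-tuple classifier can be sketched as below (our own toy implementation of the memorisation-of-random-features idea, not the thesis code):

    import numpy as np

    class BinaryRAMnet:
        """Each of `n_tuples` tuples reads n randomly chosen bits of the
        input and forms an address into a per-class RAM.  Training sets the
        addressed cells to 1; a test pattern is scored per class by how
        many of its addressed cells are set."""

        def __init__(self, n_bits, n=8, n_tuples=64, n_classes=2, seed=0):
            rng = np.random.default_rng(seed)
            self.tuples = rng.choice(n_bits, size=(n_tuples, n))
            self.rams = np.zeros((n_classes, n_tuples, 2 ** n), dtype=np.uint8)
            self.weights = 2 ** np.arange(n)

        def _addresses(self, x):
            # interpret the n sampled bits of x as an n-bit RAM address
            return (x[self.tuples] * self.weights).sum(axis=1)

        def train(self, x, label):
            self.rams[label, np.arange(len(self.tuples)), self._addresses(x)] = 1

        def classify(self, x):
            addr = self._addresses(x)
            scores = self.rams[:, np.arange(len(self.tuples)), addr].sum(axis=1)
            return int(np.argmax(scores))

Note the absence of any cost function: training is one-shot memorisation, which is the source of both the speed and the potential sub-optimality discussed above.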

Relevance: 80.00%

Abstract:

This thesis describes an investigation by the author into the spares operation of CompAir BroomWade Ltd. Whilst the complete system, including the warehousing and distribution functions, was investigated, the thesis concentrates on the provisioning aspect of the spares supply problem. Analysis of the historical data showed the presence of significant fluctuations in all the measures of system performance. Two Industrial Dynamics simulation models were developed to study this phenomenon. The models showed that any fluctuation in end-customer demand would be amplified as it passed through the distributor and warehouse stock control systems. The evidence from the available historical data supported this view of the system's operation. The models were used to determine which parts of the total system could be expected to exert a critical influence on its performance. The lead time parameters of the supply sector were found to be critical, and further study showed that the manner in which the lead time changed with work-in-progress levels was also an important factor. The problem therefore resolved into the design of a spares manufacturing system which exhibited the appropriate dynamic performance characteristics. The gross level of entity representation inherent in the Industrial Dynamics methodology was found to limit the value of these models in the development of detailed design proposals. Accordingly, an interacting job shop simulation package was developed to allow detailed evaluation of the effect of organisational factors on the performance characteristics of a manufacturing system. The package was used to develop a design for a pilot spares production unit. The need for a manufacturing system to perform successfully under conditions of fluctuating demand is not limited to the spares field. Thus, although the spares exercise provides an example of the approach, the concepts and techniques developed can be considered to have broad application throughout the batch manufacturing industry.
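
The amplification mechanism the models identified can be illustrated with a toy stock-control stage (a schematic of the general phenomenon under our own assumptions, not the author's Industrial Dynamics models):

    import numpy as np

    def stock_control_stage(demand, cover=2.0, smooth=0.3):
        """Toy periodic-review rule: forecast demand by exponential
        smoothing, then order enough to meet the forecast plus the gap to a
        target stock of `cover` forecast-periods.  The orders become the
        demand seen by the next stage upstream."""
        orders, forecast = [], demand[0]
        stock = (cover + 1.0) * demand[0]
        for d in demand:
            forecast += smooth * (d - forecast)
            stock -= d
            order = max(0.0, forecast + cover * forecast - stock)
            stock += order                      # assume immediate delivery
            orders.append(order)
        return np.array(orders)

    # a 10% step in end-customer demand grows as it moves upstream
    customer = np.r_[np.full(20, 100.0), np.full(20, 110.0)]
    distributor = stock_control_stage(customer)
    warehouse = stock_control_stage(distributor)
    print(customer.max(), distributor.max(), warehouse.max())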

Relevance: 80.00%

Abstract:

The purpose is to develop expert systems in which reasoning by analogy is used. Knowledge "closeness" problems are known to emerge frequently in such systems when knowledge is represented by different production rules. To determine a degree of closeness for production rules, a distance between predicates is introduced. Different types of distances between the value distribution functions of two predicates are considered for the case when the predicates are "true". Asymptotic features and interrelations of the distances are studied. Predicate value distribution functions are estimated by empirical distribution functions, and a procedure is proposed for this purpose. The adequacy of the obtained distribution functions is tested on the basis of the statistical χ² criterion, and a testing mechanism is discussed. For parametric families of distribution functions, a theorem is proved by which the determination of predicate closeness is reduced to the simpler procedure of measuring Euclidean distances between distribution function parameters. The proposed distance measurement apparatus may be applied in expert systems when reasoning is constructed by analogy.
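
As one concrete instance of the kind of distance studied (our example; the article considers several such metrics), the Kolmogorov distance between the value distribution functions F_1 and F_2 of two predicates is

    d_K(F_1, F_2) = \sup_x \left| F_1(x) - F_2(x) \right|

and, for a parametric family F(x; \theta), the theorem mentioned above permits replacing such functional distances by the Euclidean distance \| \theta_1 - \theta_2 \| between the parameter vectors.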

Relevance: 80.00%

Abstract:

It is shown that the method of generalized interval estimates (GIE), originally intended for eliciting and formally representing expert knowledge about the uncertain quantitative input data of models in intelligent expert decision support systems (EDSS), can be regarded as a development of the scenario approach in decision theory. Procedures are proposed for applying the GIE method to problems with dependent parameters, such as forecasting the recoverable reserves of fields as a function of hydrocarbon price levels. Analytical expressions are established for the probability distribution functions of the generalized uniform distributions used in scenario analysis and in the analysis of the output indicators of the models included in the EDSS model base.