957 results for realistic neural modeling


Relevance:

20.00%

Publisher:

Abstract:

In this article, a new flame extinction model based on the k/epsilon turbulence time-scale concept is proposed to predict flame liftoff heights over a wide range of coflow temperatures and O2 mass fractions. The flame is assumed to be quenched when the fluid time scale is less than the chemical time scale (Da < 1). The chemical time scale is derived as a function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type. The extinction model has been tested for a variety of conditions: (a) ambient coflow conditions (1 atm and 300 K) for propane, methane and hydrogen jet flames, (b) highly preheated coflow, and (c) high-temperature, low-oxidizer-concentration coflow. Predicted flame liftoff heights of jet diffusion and partially premixed flames are in excellent agreement with the experimental data for all the simulated conditions and fuels. It is observed that flame stabilization occurs at a point near the stoichiometric mixture-fraction surface, where the local flow velocity equals the local flame propagation speed. The present method is used to determine the chemical time scale for the conditions existing in the mild/flameless combustion burners investigated by the authors earlier. The model successfully predicts the initial premixing of the fuel with combustion products before the combustion reaction initiates. It is inferred from these numerical simulations that fuel injection is followed by intense premixing with hot combustion products in the primary zone, with the combustion reaction following further downstream. Reaction-rate contours suggest that reaction takes place over a large volume and that its magnitude is lower than in the conventional combustion mode. The appearance of attached flames in the mild combustion burners at low thermal inputs is also predicted, and is due to the lower average jet velocity and larger residence times in the near-injection zone.
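The quench criterion above reduces to a one-line Damköhler-number test. A minimal sketch: the paper derives the chemical time scale as a fitted function of temperature, oxidizer fraction, dilution, jet velocity and fuel type, whereas here it is simply an input.

```python
def fluid_time_scale(k, eps):
    """Turbulence (fluid) time scale from the k-epsilon model: tau_f = k / eps."""
    return k / eps

def is_quenched(tau_fluid, tau_chem):
    """Flame is assumed locally extinguished when Da = tau_fluid / tau_chem < 1."""
    da = tau_fluid / tau_chem
    return da < 1.0

# Illustrative cell: k = 0.5 m^2/s^2, eps = 100 m^2/s^3 gives tau_f = 5 ms;
# with a 10 ms chemical time scale, Da = 0.5 < 1 and the cell is quenched.
tau_f = fluid_time_scale(0.5, 100.0)
print(is_quenched(tau_f, 0.010))  # prints True
```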


A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating the variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate-length technology. The proposed methodology is demonstrated with a multiplier circuit built from the NAND-gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical techniques of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), are employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs, with an error of less than 1% and a reduction in uncertainty of at least 6X on a normalized basis, compared with worst-case design.
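The DOE/LSM step can be illustrated with a coded two-level full factorial, where the least-squares effect estimates reduce to contrast averages because the design matrix is orthogonal. The two factors and the dynamic-power responses below are hypothetical, not the paper's data.

```python
# Coded 2^2 full factorial design: each process parameter at -1/+1.
design = [(-1, -1), (+1, -1), (-1, +1), (+1, +1)]
# Hypothetical dynamic-power responses (arbitrary units) at each run.
y = [10.0, 14.0, 12.0, 18.0]

n = len(y)
# For an orthogonal coded design, each least-squares coefficient is just the
# average of the responses weighted by the corresponding coded column.
b0  = sum(y) / n
b1  = sum(x1 * yi for (x1, _), yi in zip(design, y)) / n
b2  = sum(x2 * yi for (_, x2), yi in zip(design, y)) / n
b12 = sum(x1 * x2 * yi for (x1, x2), yi in zip(design, y)) / n

# Fitted response surface: P = b0 + b1*x1 + b2*x2 + b12*x1*x2.
# At run (-1, -1): 13.5 - 2.5 - 1.5 + 0.5 = 10.0, reproducing the data.
```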


Increased emphasis on rotorcraft performance and operational capabilities has necessitated accurate computation of aerodynamic stability and control parameters. System identification is one such tool, in which the model structure and parameters, such as the aerodynamic stability and control derivatives, are derived. In the present work, the rotorcraft aerodynamic parameters are computed using radial basis function neural networks (RBFN) in the presence of both state and measurement noise. The effect of the presence of outliers in the data is also considered. RBFN is found to give superior results compared to finite-difference derivatives for noisy data. (C) 2010 Elsevier Inc. All rights reserved.
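A sketch of why RBFN derivatives are attractive: a Gaussian RBF network is linear in its output weights, so it can be trained with simple LMS updates, after which its derivative is available in closed form instead of by finite differences. The centers, width, learning rate and the sin target below are illustrative choices, not the rotorcraft models of the paper.

```python
import math

# Gaussian RBF network with fixed centers and width; only output weights train.
centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
WIDTH = 0.5

def phi(x, c):
    return math.exp(-((x - c) ** 2) / (2.0 * WIDTH ** 2))

def rbfn(x, w):
    return sum(wi * phi(x, c) for wi, c in zip(w, centers))

def rbfn_deriv(x, w):
    # Analytic derivative of the network -- no finite differencing needed.
    return sum(wi * phi(x, c) * (c - x) / WIDTH ** 2 for wi, c in zip(w, centers))

# Noise-free training data: y = sin(pi * x) on [-1, 1].
xs = [i / 10.0 - 1.0 for i in range(21)]
ys = [math.sin(math.pi * x) for x in xs]

# The output is linear in the weights, so LMS gradient descent converges.
w = [0.0] * len(centers)
lr = 0.05
for _ in range(3000):
    for x, y in zip(xs, ys):
        err = rbfn(x, w) - y
        for j, c in enumerate(centers):
            w[j] -= lr * err * phi(x, c)

# d/dx sin(pi*x) at x = 0 is pi; the network derivative should land nearby.
print(rbfn_deriv(0.0, w))
```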


Solar UV radiation is harmful to life on Earth, but fortunately atmospheric oxygen and ozone absorb almost all of the most energetic UVC photons. However, part of the UVB radiation and much of the UVA radiation reach the surface of the Earth, where they affect human health, the environment and materials, and drive atmospheric and aquatic photochemical processes. To quantify these effects and processes, ground-based UV measurements and radiative transfer modeling are needed to estimate the amounts of UV radiation reaching the biosphere. Satellite measurements, with their near-global spatial coverage and long-term data continuity, offer an attractive option for estimating surface UV radiation. This work focuses on radiative-transfer-based methods for estimating the UV radiation reaching the surface of the Earth. The objectives of the thesis were to implement the surface UV algorithm originally developed at NASA Goddard Space Flight Center for estimating surface UV irradiance from the measurements of the Dutch-Finnish Ozone Monitoring Instrument (OMI), to improve the original surface UV algorithm, especially in relation to snow cover, to validate the OMI-derived daily surface UV doses against ground-based measurements, and to demonstrate how satellite-derived surface UV data can be used to study the effects of UV radiation. The thesis consists of seven original papers and a summary. The summary includes an introduction to the OMI instrument, a review of the methods used for modeling surface UV from satellite data, and the conclusions of the main results of the original papers. The first two papers describe the algorithm used for estimating surface UV amounts from OMI measurements, as well as the unique Very Fast Delivery processing system developed for processing the OMI data received at the Sodankylä satellite data centre.
The third and fourth papers present algorithm improvements related to the surface UV albedo of snow-covered land. The fifth paper presents a comparison of the OMI-derived daily erythemal doses with those calculated from ground-based measurement data; it gives an estimate of the expected accuracy of the OMI-derived surface UV doses under various atmospheric and other conditions, and discusses the causes of the differences between the satellite-derived and ground-based data. The last two papers demonstrate the use of the satellite-derived surface UV data: the sixth paper presents an assessment of photochemical decomposition rates in the aquatic environment, and the seventh presents the use of satellite-derived daily surface UV doses in planning outdoor material-weathering tests.
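The erythemal doses discussed above are obtained by weighting the spectral irradiance with the CIE erythemal action spectrum. A sketch assuming the standard McKinlay-Diffey parameterization; the exact OMI implementation may differ in detail.

```python
def cie_erythemal_weight(wl_nm):
    """CIE (McKinlay-Diffey) erythemal action spectrum, wavelength in nm."""
    if wl_nm <= 298.0:
        return 1.0
    if wl_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wl_nm))
    if wl_nm <= 400.0:
        return 10.0 ** (0.015 * (139.0 - wl_nm))
    return 0.0

def erythemal_irradiance(spectrum, dl_nm=1.0):
    """Rectangle-rule weighted integral: sum of E(lambda)*w(lambda)*dlambda.
    `spectrum` is a list of (wavelength_nm, spectral_irradiance) pairs."""
    return sum(e * cie_erythemal_weight(wl) for wl, e in spectrum) * dl_nm
```

Integrating the weighted irradiance over a day then yields the daily erythemal dose that the validation papers compare against ground-based values.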


Properties of nanoparticles are size dependent, and a model to predict particle size is of importance. Gold nanoparticles are commonly synthesized by reducing tetrachloroauric acid with trisodium citrate, a method pioneered by Turkevich et al. (Discuss. Faraday Soc. 1951, 11, 55). Data from several investigators who used this method show that when the ratio of the initial concentrations of citrate to gold is varied from 0.4 to about 2, the final mean size of the particles varies by a factor of 7, while further increases in the ratio have hardly any effect on the size. In this paper, a model is developed to explain this widely varying dependence. The steps that lead to the formation of particles are as follows: reduction of Au3+ in solution, disproportionation of Au+ to gold atoms and their nucleation, growth by disproportionation on the particle surface, and coagulation. Oxidation of citrate results in the formation of dicarboxy acetone, which aids nucleation but also decomposes into side products. A detailed kinetic model is developed on the basis of these steps and is combined with a population balance to predict the particle-size distribution. The model shows that, unlike the usual balance between nucleation and growth that determines particle size, it is the balance between the rate of nucleation and the degradation of dicarboxy acetone that determines the particle size in the citrate process. It is this feature that explains the unusual dependence of the mean particle size on the ratio of citrate to gold salt concentration. It is also found that coagulation plays an important role in determining the particle size at high concentrations of citrate.
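The claimed balance (nucleation rate versus dicarboxy acetone degradation) can be caricatured with a toy Euler integration. Every rate constant below is hypothetical, and the real model couples full kinetics to a population balance; the sketch only illustrates the direction of the effect: faster DCA degradation means fewer nuclei and hence larger final particles.

```python
def toy_particle_size(k_deg, k_nuc=1.0, dca0=1.0, gold0=1.0,
                      dt=1e-3, t_end=20.0):
    """Explicit-Euler toy model: dicarboxy acetone (DCA) aids nucleation but
    also decomposes into side products. Returns a relative mean diameter
    ~ (gold per particle)^(1/3). All constants are illustrative."""
    dca, nuclei, t = dca0, 0.0, 0.0
    while t < t_end:
        nuclei += k_nuc * dca * dt   # nucleation aided by DCA
        dca -= k_deg * dca * dt      # DCA decomposes into side products
        t += dt
    # All gold is assumed to end up on the nuclei; diameter scales as the
    # cube root of the volume per particle.
    return (gold0 / nuclei) ** (1.0 / 3.0)

# Faster DCA degradation -> fewer nuclei -> larger particles.
print(toy_particle_size(0.5), toy_particle_size(5.0))
```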


This paper is concerned with experimental and modeling studies of the smoldering rates of incense sticks as a function of the ambient oxygen fraction, the flow velocity and the stick size. Experimental results are obtained for both forward and reverse smolder conditions. The results are explained on the basis of surface combustion, driven by the diffusion of oxygen to the surface by both free and forced convection, supporting heat transfer into the solid by conduction, into the stream by convection, and radiant heat transfer from the surface. The heat release at the surface is controlled by the convective transport of the oxidizer to the surface. To obtain the diffusion rates, particularly for reverse smolder, CFD calculations of the fluid flow along with a passive scalar are needed; these calculations have been made for both forward and reverse smolder. The interesting aspect of the CFD calculations is that while the Nusselt number for forward smolder shows a clear sqrt(Re) dependence (Re = flow Reynolds number), the result for reverse smolder shows a peak in the variation with Reynolds number, with values lower than for forward smolder and with unsteadiness in the flow beyond a certain flow rate. The results for the flow behavior and the Nusselt number are used in a simple model of the heat transfer at the smoldering surface to obtain the dependence of the smoldering rate on the diameter of the incense stick, the air flow rate and the oxygen fraction. The results are presented as a correlation for the non-dimensional smoldering rate in terms of the radiant flux from the surface and the heat generation rate at the surface. The correlations appear reasonable for both forward and reverse smolder cases.
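The forward-smolder scaling reported above can be written down directly. The prefactor and the Nu·Y_O2 rate scaling below are illustrative stand-ins for the paper's correlation, which also involves the radiant flux and surface heat generation; note that the non-monotonic reverse-smolder behavior is precisely what a sqrt(Re) law cannot capture.

```python
import math

def nusselt_forward(re, c=0.6):
    """Forward-smolder Nusselt number with the sqrt(Re) dependence seen in
    the CFD results; the prefactor c is illustrative, not the paper's value."""
    return c * math.sqrt(re)

def toy_smolder_rate(re, y_o2, c=0.6):
    """Toy nondimensional smolder rate: heat release at the surface is set by
    convective oxygen transport, so the rate scales as Nu * Y_O2."""
    return nusselt_forward(re, c) * y_o2
```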


We report on a search for the standard model Higgs boson produced in association with a $W$ or $Z$ boson in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV recorded by the CDF II experiment at the Tevatron in a data sample corresponding to an integrated luminosity of 2.1 fb$^{-1}$. We consider events which have no identified charged leptons, an imbalance in transverse momentum, and two or three jets where at least one jet is consistent with originating from the decay of a $b$ hadron. We find good agreement between data and predictions. We place 95% confidence level upper limits on the production cross section for several Higgs boson masses ranging from 110 GeV/$c^2$ to 150 GeV/$c^2$. For a mass of 115 GeV/$c^2$ the observed (expected) limit is 6.9 (5.6) times the standard model prediction.
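For intuition on what a 95% confidence level upper limit means, the simplest counting-experiment construction is shown below: the smallest signal yield that would make the observed count (or fewer) a ≤5% probability outcome. This is a textbook Poisson version, not the collaboration's binned-likelihood machinery.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return math.exp(-mu) * sum(mu ** k / math.factorial(k) for k in range(n + 1))

def upper_limit_95(n_obs, bkg, step=1e-3):
    """Smallest signal s with P(N <= n_obs | s + bkg) <= 0.05: a simple
    frequentist counting limit found by scanning s upward."""
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 0.05:
        s += step
    return s

# Classic check: with 0 observed and no background, the limit is ln(20) ~ 3.0.
print(upper_limit_95(0, 0.0))
```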


The 11β-hydroxysteroid dehydrogenase enzymes (11β-HSD) 1 and 2 regulate the amounts of cortisone and cortisol in tissues. An excess of 11β-HSD1, particularly in visceral adipose tissue, causes the classic symptoms of the metabolic syndrome, which offers the possibility of treating the metabolic syndrome through selective inhibition of 11β-HSD1. Inhibition of 11β-HSD2 causes cortisol-mediated activation of mineralocorticoid receptors, which in turn leads to hypertensive side effects. Despite these side effects, inhibition of 11β-HSD2 may be useful in situations where an increased cortisol level in the body is desired. Numerous selective 11β-HSD1 inhibitors have been developed, but fewer 11β-HSD2 inhibitors have been reported. The difference between the active sites of these two isozymes is also unknown, which complicates the development of selective inhibitors for either enzyme. This work had two aims: (1) to find the difference between the 11β-HSD enzymes, and (2) to develop a pharmacophore model that could be used for virtual screening of selective 11β-HSD2 inhibitors. The problem was approached computationally: by homology modeling, docking of small molecules into the protein, ligand-based pharmacophore modeling, and virtual screening. The SwissModeler program was used for homology modeling, and the resulting model superimposed well both onto its template (17β-HSD1) and onto 11β-HSD1. No difference between the enzymes was found by examining the superimposed structures. Seven compounds, six of which are 11β-HSD2-selective, were docked into both enzymes using the program GOLD. The compounds bound to 11β-HSD1 in the same way as most 11β-HSD1-selective or non-selective inhibitors, whereas all compounds docked into 11β-HSD2 in an inverted orientation.
This binding mode allows hydrogen bonds to Ser310 and Asn171, amino acids that were seen only in 11β-HSD2. The program LigandScout3.0 was used for pharmacophore modeling, and the virtual screenings were also run with it. The two pharmacophore models created, based on the six 11β-HSD2-selective compounds also used for docking, consisted of six features (hydrogen-bond acceptor, hydrogen-bond donor and hydrophobic) and exclusion volumes. The features most important for 11β-HSD2 selectivity are a hydrogen-bond acceptor that can form a bond with Ser310 and a hydrogen-bond donor next to it. No interaction partner was found for this hydrogen-bond donor in the 11β-HSD2 model; a suitably oriented water molecule within the protein could, however, be a plausible solution for the missing interaction partner. Since both pharmacophore models retrieved 11β-HSD2-selective compounds and excluded non-selective ones in a test screening, both models were used to screen a database compiled from the compounds (2700 in total) stored at the University of Innsbruck. From the hits of the two screenings, a total of ten compounds were selected and sent for biological testing. The results of the biological tests will ultimately confirm how well the created models actually represent 11β-HSD2 selectivity.


We present a search for standard model Higgs boson production in association with a W boson in proton-antiproton collisions at a center of mass energy of 1.96 TeV. The search employs data collected with the CDF II detector that correspond to an integrated luminosity of approximately 1.9 inverse fb. We select events consistent with a signature of a single charged lepton, missing transverse energy, and two jets. Jets corresponding to bottom quarks are identified with a secondary vertex tagging method, a jet probability tagging method, and a neural network filter. We use kinematic information in an artificial neural network to improve discrimination between signal and background compared to previous analyses. The observed number of events and the neural network output distributions are consistent with the standard model background expectations, and we set 95% confidence level upper limits on the production cross section times branching fraction ranging from 1.2 to 1.1 pb, or 7.5 to 102 times the standard model expectation, for Higgs boson masses from 110 to 150 GeV/$c^2$, respectively.


We develop an alternate characterization of the statistical distribution of the inter-cell interference power observed in the uplink of CDMA systems. We show that the lognormal distribution better matches the cumulative distribution and complementary cumulative distribution functions of the uplink interference than the conventionally assumed Gaussian distribution and variants based on it. This is in spite of the fact that many users together contribute to uplink interference, with the number of users and their locations both being random. Our observations hold even in the presence of power control and cell selection, which have hitherto been used to justify the Gaussian distribution approximation. The parameters of the lognormal are obtained by matching moments, for which detailed analytical expressions that incorporate wireless propagation, cellular layout, power control, and cell selection parameters are developed. The moment-matched lognormal model, while not perfect, is an order of magnitude better in modeling the interference power distribution.
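The moment-matching step is standard: given the mean and variance of the interference power, the lognormal parameters follow in closed form from the lognormal moment identities. A minimal sketch; the paper's detailed moment expressions for propagation, cellular layout, power control and cell selection are not reproduced here.

```python
import math

def lognormal_from_moments(mean, var):
    """Match lognormal parameters (mu, sigma) to a given mean and variance:
    sigma^2 = ln(1 + var / mean^2),  mu = ln(mean) - sigma^2 / 2.
    These follow from E[X] = exp(mu + sigma^2/2) and
    Var[X] = (exp(sigma^2) - 1) * E[X]^2."""
    s2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - 0.5 * s2
    return mu, math.sqrt(s2)
```

For a sanity check, feeding in the mean and variance of a standard lognormal (mu = 0, sigma = 1) recovers those parameters.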


A detailed mechanics-based model is developed to analyze the problem of structural instability in slender aerospace vehicles. Coupling among the rigid-body modes, the longitudinal vibrational modes and the transverse vibrational modes due to an asymmetric lifting-body cross-section is considered. The model also incorporates the effects of aerodynamic pressure and the propulsive thrust of the vehicle. The model is one-dimensional, and it can be applied to idealized slender vehicles with complex shapes. The condition under which a flexible body with internal stress waves behaves like a perfect rigid body is derived. Two methods are developed for finite element discretization of the system: (1) a time-frequency Fourier spectral finite element method and (2) an h-p finite element method. Numerical results using the above methods are presented in Part II of this paper. (C) 2010 Elsevier Ltd. All rights reserved.


Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods for detecting accrual-based earnings management have been suggested, the majority of them based on linear regression. The problem with linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used, with both expense and revenue manipulation ranging between -5% and 5% of lagged total assets. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
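All seven models start from the Jones (1991) accruals regression, whose residuals are taken as discretionary accruals. A minimal pure-Python sketch of that linear baseline follows; the data in the test are synthetic, and this is not the study's estimation code.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y, solved
    with Gaussian elimination and partial pivoting; fine for a few regressors."""
    p = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    for i in range(p):                      # forward elimination with pivoting
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * p
    for i in reversed(range(p)):            # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    return coef

def jones_discretionary_accruals(total_accruals, assets_lag, d_rev, ppe):
    """Jones (1991): regress TA/A on 1/A, dREV/A and PPE/A (all scaled by
    lagged total assets A); the residuals are the discretionary accruals."""
    X = [[1.0 / a, dr / a, p / a] for a, dr, p in zip(assets_lag, d_rev, ppe)]
    y = [ta / a for ta, a in zip(total_accruals, assets_lag)]
    coef = ols(X, y)
    return [yi - sum(c * xij for c, xij in zip(coef, xi))
            for yi, xi in zip(y, X)]
```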


One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not improved by allowing for skewness. By allowing the kurtosis and skewness to be time varying, the density forecasts are not further improved but on the contrary made slightly worse. In Essay 3 a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed.
The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
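The conditional-variance side of these models rests on the standard GARCH(1,1) recursion, whose multi-step forecast reverts geometrically to the long-run variance. A minimal sketch of the recursion and the closed-form forecast; parameter values used in the usage comment are illustrative, not estimates from the thesis.

```python
def garch_filter(omega, alpha, beta, returns, sigma2_0):
    """GARCH(1,1) recursion sigma2_{t+1} = omega + alpha*r_t^2 + beta*sigma2_t.
    Returns the one-step-ahead conditional variance after the sample."""
    s2 = sigma2_0
    for r in returns:
        s2 = omega + alpha * r * r + beta * s2
    return s2

def garch_forecast(omega, alpha, beta, sigma2_next, horizon):
    """k-step-ahead variance forecast:
    sigma2_{t+k} = sigma2_LR + (alpha+beta)**(k-1) * (sigma2_{t+1} - sigma2_LR),
    where sigma2_LR = omega / (1 - alpha - beta) is the long-run variance."""
    lr = omega / (1.0 - alpha - beta)
    phi = alpha + beta
    return lr + phi ** (horizon - 1) * (sigma2_next - lr)

# With omega=0.05, alpha=0.05, beta=0.9 the long-run variance is 1.0, so a
# one-step variance of 2.0 decays toward 1.0 as the horizon grows.
```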