876 results for Model-based bootstrap


Relevance:

90.00%

Publisher:

Abstract:

A swarm is a temporary structure formed when several thousand honey bees leave their hive and settle on some object such as the branch of a tree. They remain in this position until a suitable site for a new home is located by the scout bees. A continuum model based on heat conduction and heat generation is used to predict temperature profiles in swarms. Since internal convection is neglected, the model is applicable only at low values of the ambient temperature T_a. Guided by the experimental observations of Heinrich (1981a-c, J. Exp. Biol. 91, 25-55; Science 212, 565-566; Sci. Am. 244, 147-160), the analysis is carried out mainly for non-spherical swarms. The effective thermal conductivity is estimated using the data of Heinrich (1981a, J. Exp. Biol. 91, 25-55) for dead bees. For T_a = 5 and 9 degrees C, results based on a modified version of the heat generation function due to Southwick (1991, The Behaviour and Physiology of Bees, pp. 28-47, C.A.B. International, London) are in reasonable agreement with measurements. Results obtained with the heat generation function of Myerscough (1993, J. Theor. Biol. 162, 381-393) are qualitatively similar to those obtained with Southwick's function, but the error is larger in the former case. The results suggest that the bees near the periphery generate more heat than those near the core, in accord with the conjecture of Heinrich (1981c, Sci. Am. 244, 147-160). On the other hand, for T_a = 5 degrees C, the heat generation function of Omholt and Lonvik (1986, J. Theor. Biol. 120, 447-456) leads to a trivial steady state where the entire swarm is at the ambient temperature. Therefore an acceptable heat generation function must result in a steady state which is both non-trivial and stable with respect to small perturbations. Omholt and Lonvik's function satisfies the first requirement, but not the second.
For T_a = 15 degrees C, there is a considerable difference between predicted and measured values, probably due to the neglect of internal convection in the model.
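The conduction-plus-generation core of such a model is easy to state. A minimal sketch for a spherical swarm with uniform heat generation (a simplification: the paper treats non-spherical swarms and non-uniform generation), with all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

# Steady-state conduction with uniform volumetric heat generation q in a
# spherical swarm of radius R and effective thermal conductivity k.
# Analytic profile: T(r) = T_s + q * (R**2 - r**2) / (6 * k).
# All values below are illustrative assumptions.
k = 0.1     # effective thermal conductivity, W/(m K)
q = 200.0   # volumetric heat generation, W/m^3
R = 0.1     # swarm radius, m
T_s = 5.0   # surface temperature, deg C (close to ambient T_a)

r = np.linspace(0.0, R, 101)
T = T_s + q * (R**2 - r**2) / (6.0 * k)

core_excess = T[0] - T_s   # the core is the warmest point
print(f"core-to-surface temperature rise: {core_excess:.2f} C")
```

With these toy numbers the profile is parabolic and hottest at the core; the paper's point is precisely that matching Heinrich's data requires departing from this uniform-generation picture.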


A one-dimensional, biphasic, multicomponent steady-state model based on phenomenological transport equations for the catalyst layer, diffusion layer, and polymeric electrolyte membrane has been developed for a liquid-feed solid polymer electrolyte direct methanol fuel cell (SPE-DMFC). The model employs three important requisites: (i) analytical treatment of nonlinear terms, to obtain a faster numerical solution and to render the iterative scheme easier to converge; (ii) an appropriate description of two-phase transport phenomena in the diffusive region of the cell, to account for flooding and water condensation/evaporation effects; and (iii) treatment of polarization effects due to methanol crossover. An improved numerical solution has been achieved by coupling analytical integration of the kinetics and transport equations in the reaction layer, which explicitly includes the effect of concentration and pressure gradients on cell polarization within the bulk catalyst layer. In particular, the integrated kinetic treatment explicitly accounts for the nonhomogeneous porous structure of the catalyst layer and the diffusion of reactants within and between the pores in the cathode. At the anode, the analytical integration of electrode kinetics has been obtained under the assumption of a macrohomogeneous electrode porous structure, because methanol transport in a liquid-feed SPE-DMFC is essentially a single-phase process, owing to the high miscibility of methanol with water and its higher concentration relative to the gaseous reactants. A simple empirical model accounts for the effect of capillary forces on liquid-phase saturation in the diffusion layer. Consequently, the diffusive and convective flow equations, comprising the Nernst-Planck relation for solutes, Darcy's law for liquid water, and the Stefan-Maxwell equation for gaseous species, have been modified to include the capillary flow contribution to transport.
To understand fully the role of model parameters in simulating the performance of the DMFC, we have carried out a parametric study. An experimental validation of the model has also been carried out. (C) 2003 The Electrochemical Society.


Modeling the performance behavior of parallel applications to predict their execution times for larger problem sizes and numbers of processors has been an active area of research for several years. Existing curve-fitting strategies for performance modeling utilize data from experiments conducted under uniform loading conditions; hence the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times under any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
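A rational-polynomial performance model of this kind can be sketched in a few lines. The specific form t ≈ (a0 + a1·n)/(1 + b1·l), the synthetic data, and the linearized least-squares fit below are illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np

# Toy rational-polynomial model: execution time t as a function of problem
# size n and background load fraction l.  Synthetic "measurements" are
# generated from a known rational model plus 1% noise.
rng = np.random.default_rng(0)
n = rng.uniform(100, 1000, 200)                      # problem size
l = rng.uniform(0.0, 0.8, 200)                       # background load fraction
t_true = (0.5 + 0.01 * n) / (1.0 - 0.9 * l)          # "true" execution times
t = t_true * (1 + 0.01 * rng.standard_normal(200))   # measured with noise

# Fit t ~ (a0 + a1*n) / (1 + b1*l) by linearizing:
#   t = a0 + a1*n - b1*(t*l)   is linear in (a0, a1, b1).
X = np.column_stack([np.ones_like(n), n, -t * l])
a0, a1, b1 = np.linalg.lstsq(X, t, rcond=None)[0]

t_pred = (a0 + a1 * n) / (1 + b1 * l)
mape = np.mean(np.abs(t_pred - t) / t) * 100
print(f"average percentage prediction error: {mape:.2f}%")
```

The linearization keeps the fit a plain least-squares problem while preserving the rational form, which is what lets the denominator capture the blow-up of execution time under heavy load.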


We address the problem of recognition and retrieval of relatively weak industrial signals, such as Partial Discharges (PD), buried in excessive noise. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), which has time-frequency characteristics similar to those of the PD pulse; conventional frequency-based DSP techniques are therefore not useful in retrieving PD pulses. We employ statistical signal modeling based on a combination of a long-memory process and probabilistic principal component analysis (PPCA). A parametric analysis of the signal is exercised for extracting the features of the desired pulses. We incorporate a wavelet-based bootstrap method for obtaining the noise training vectors from observed data. The procedure adopted in this work is completely different from the research work reported in the literature, which is generally based on the desired signal frequency and noise frequency.
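The wavelet-based bootstrap step can be sketched with a one-level Haar transform: decompose an observed noise record, resample the detail coefficients with replacement, and invert. The Haar choice and the toy noise record are assumptions; the paper does not specify this exact construction:

```python
import numpy as np

# Wavelet-based bootstrap sketch for generating noise training vectors:
# one-level Haar analysis of a noise record, resampling of the detail
# coefficients with replacement, then inverse Haar synthesis.
rng = np.random.default_rng(5)
noise = rng.standard_normal(256)   # observed noise record (toy stand-in)

# One-level Haar analysis (orthonormal):
approx = (noise[0::2] + noise[1::2]) / np.sqrt(2)
detail = (noise[0::2] - noise[1::2]) / np.sqrt(2)

def bootstrap_replicate():
    # Resample detail coefficients with replacement, keep the approximation,
    # and apply the inverse Haar transform.
    d = rng.choice(detail, size=detail.size, replace=True)
    rec = np.empty(noise.size)
    rec[0::2] = (approx + d) / np.sqrt(2)
    rec[1::2] = (approx - d) / np.sqrt(2)
    return rec

training = np.array([bootstrap_replicate() for _ in range(100)])
print(training.shape)
```

Each replicate shares the coarse structure of the observed noise while the resampled detail coefficients vary, which is what makes the replicates usable as training vectors.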


Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the model-based data-resolution matrix characteristics. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. The choice of independent measurements is made based on the closeness of the diagonal value magnitude to the off-diagonal entries. The reconstruction results obtained using all measurements were compared to those obtained using only independent measurements, in both numerical and experimental phantom cases. The traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstruction results had poor quality, with major boundary artifacts. The number of independent measurements obtained using data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework for characterizing and optimizing a given data-collection strategy.
(C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
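For a linear, Tikhonov-regularized reconstruction the data-resolution matrix can be formed directly. A minimal sketch with a random sensitivity matrix; the matrix, the problem sizes, and the diagonal-versus-off-diagonal selection rule are illustrative, not the paper's exact procedure:

```python
import numpy as np

# Data-resolution matrix for y ~ J x with Tikhonov regularization:
# x_hat = (J^T J + lam I)^-1 J^T y, so predicted data are y_pred = D y with
#   D = J (J^T J + lam I)^-1 J^T.
# J below is a random stand-in for the sensitivity matrix.
rng = np.random.default_rng(1)
m, p = 40, 25          # number of measurements, number of image unknowns
J = rng.standard_normal((m, p))
lam = 0.1

D = J @ np.linalg.solve(J.T @ J + lam * np.eye(p), J.T)

importance = np.diag(D)                       # how well each measurement is resolved
coupling = np.abs(D - np.diag(importance))    # dependence among measurements

# Keep measurements whose diagonal entry dominates their off-diagonal coupling:
independent = importance > coupling.max(axis=1)
print(f"{independent.sum()} of {m} measurements treated as independent")
```

D is symmetric positive semi-definite, so the diagonal entries are nonnegative and directly comparable across measurements.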


Context-aware computing is useful in providing individualized services, focusing mainly on acquiring the surrounding context of the user. By comparison, very little research has been completed on integrating context from different environments, despite its usefulness in diverse applications such as healthcare, M-commerce, and tourist-guide applications. In particular, one of the most important requirements for providing personalized service in a highly dynamic and constantly changing user environment is a context model which aggregates context from different domains to infer the context of an entity at a more abstract level. Hence, the purpose of this paper is to propose a context model, based on cognitive aspects, that relates contextual information so as to better capture the observation of certain worlds of interest for a more sophisticated context-aware service. We developed a C-IOB (Context-Information, Observation, Belief) conceptual model to analyze the context data from physical, system, application, and social domains and to infer context at a more abstract level. The beliefs developed about an entity (person, place, thing) are primitives in most theories of decision making, so applications can use these beliefs, in addition to transaction history, for providing intelligent service. We enhance the proposed context model by further classifying context information into three categories (well-defined, qualitative, and credible context information) to make the system more realistic for real-world implementation. The proposed model is deployed to assist an M-commerce application. The simulation results show that the service selection and service delivery of the system are high compared to a traditional system.


A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve the image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error, i.e. the difference between experimental measurements and calculated data. The regularization parameter in the reconstruction algorithm is thus modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is carried out with PEPR in a Model Based Iterative Image Reconstruction (MoBIIR) algorithm as well as with the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS). The effect of the PEPR method is also studied with phantoms of different configurations and with different current injection methods. All the resistivity images reconstructed with the PEPR method are compared with the single-step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and the solution error in each iteration, for both simulated and experimental data in both algorithms, and improves the reconstructed images, yielding better contrast-to-noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC), and diametric resistivity profile (DRP). (C) 2013 Elsevier Ltd. All rights reserved.
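The core idea, a regularization parameter tied to the current projection error, can be sketched on a toy linear problem. The forward matrix, the noise level, the mean-squared-error scaling of the parameter, and the iteration count are all illustrative assumptions, not the paper's EIT formulation:

```python
import numpy as np

# PEPR-style update on a toy linear problem y = A x: at each iteration the
# Tikhonov regularization parameter is set from the current projection error
# (measured minus calculated data), so it decays automatically as the fit
# improves, without manual tuning.
rng = np.random.default_rng(2)
m, p = 30, 20
A = rng.standard_normal((m, p))
x_true = rng.standard_normal(p)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(p)
for _ in range(20):
    r = y - A @ x                        # projection error
    lam = np.dot(r, r) / m               # regularization tracks the error level
    x = x + np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ r)

err = np.linalg.norm(y - A @ x)
print(f"final projection error: {err:.4f}")
```

Early iterations are strongly damped because the projection error (and hence lam) is large; as the residual shrinks, the update approaches an ordinary Gauss-Newton step.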


We performed Gaussian network model based normal mode analysis of the 3-dimensional structures of multiple active and inactive forms of protein kinases. Across 14 different kinases, a larger number of residues (1095) show higher structural fluctuations in inactive states than in active states (525), suggesting that, in general, the mobility of inactive states is higher than that of active states. This statistically significant difference is consistent with higher crystallographic B-factors and conformational energies for inactive than for active states, suggesting lower stability of the inactive forms. Only a small number of inactive conformations with the DFG motif in the "in" state were found to have fluctuation magnitudes comparable to the active conformation. Our study therefore reports, for the first time, intrinsically higher structural fluctuation for almost all inactive conformations compared to the active forms. Regions with higher fluctuations in the inactive states are often localized to the αC-helix, αG-helix and activation loop, which are involved in regulation and/or in structural transitions between active and inactive states. Further analysis of 476 kinase structures involved in interactions with another domain/protein showed that many of the regions with higher inactive-state fluctuation correspond to contact interfaces. We also performed extensive GNM analysis of (i) insulin receptor kinase bound to another protein and (ii) holo and apo forms of active and inactive conformations, followed by multi-factor analysis of variance. We conclude that binding of small molecules or other domains/proteins reduces the extent of fluctuation, irrespective of active or inactive forms. Finally, we show that the computed fluctuations serve as a useful input to predict the functional state of a kinase.
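The GNM machinery underlying such an analysis is compact: build the Kirchhoff (connectivity) matrix from coordinates with a distance cutoff, then read per-residue fluctuations from the diagonal of its pseudo-inverse. The random toy chain, chain length, and cutoff below are illustrative; real analyses use C-alpha coordinates from PDB structures:

```python
import numpy as np

# Gaussian network model sketch: residue mean-square fluctuations are
# proportional to the diagonal of the pseudo-inverse of the Kirchhoff matrix
# (and hence comparable to crystallographic B-factors).
rng = np.random.default_rng(3)
n_res = 50
coords = np.cumsum(rng.normal(scale=1.5, size=(n_res, 3)), axis=0)  # toy chain

cutoff = 4.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
contact = (d < cutoff) & ~np.eye(n_res, dtype=bool)

kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)
fluct = np.diag(np.linalg.pinv(kirchhoff))  # mean-square fluctuation per residue

# Residues with larger values are the more mobile ones (cf. inactive vs active).
most_mobile = int(np.argmax(fluct))
print(f"most mobile residue index: {most_mobile}")
```

The Kirchhoff matrix is a graph Laplacian (rows sum to zero), which is why the pseudo-inverse, rather than a plain inverse, is required.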


In this paper an explicit guidance law for the powered descent phase of a soft lunar landing is presented. The descent trajectory, expressed in polynomial form, is fixed based on the boundary conditions imposed by the precise soft-landing mission. Adopting an inverse-model-based approach, the guidance command is computed from the known spacecraft trajectory. The guidance formulation ensures the vertical orientation of the spacecraft during touchdown. Also, a closed-form relation for the final flight time is proposed. The final time is expressed as a function of the initial position and velocity of the spacecraft (at the start of descent) and also depends on the desired landing site. To ensure a fuel-minimum descent, the proposed explicit method is extended to an optimal guidance formulation. The effectiveness of the proposed guidance laws is demonstrated with simulation results.
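The inverse-model idea can be illustrated in one dimension: fix a polynomial altitude profile from the boundary conditions, then read the thrust command off its second derivative. The cubic form, all numbers, and the assumed final time are illustrative; the paper treats the full 3-D problem with a vertical-attitude constraint and derives the final time rather than assuming it:

```python
import numpy as np

# 1-D inverse-model guidance sketch: choose a cubic altitude profile
# h(t) = c0 + c1 t + c2 t^2 + c3 t^3 satisfying the soft-landing boundary
# conditions h(0)=h0, h'(0)=v0, h(tf)=0, h'(tf)=0, then recover the commanded
# thrust acceleration as u(t) = h''(t) + g.
g = 1.62                  # lunar gravity, m/s^2
h0, v0 = 2000.0, -50.0    # initial altitude and (downward) velocity
tf = 60.0                 # assumed final time, s

A = np.array([
    [1.0, 0.0, 0.0,    0.0],        # h(0)  = h0
    [0.0, 1.0, 0.0,    0.0],        # h'(0) = v0
    [1.0, tf,  tf**2,  tf**3],      # h(tf) = 0
    [0.0, 1.0, 2 * tf, 3 * tf**2],  # h'(tf) = 0
])
c0, c1, c2, c3 = np.linalg.solve(A, np.array([h0, v0, 0.0, 0.0]))

t = np.linspace(0.0, tf, 601)
h = c0 + c1 * t + c2 * t**2 + c3 * t**3
u = (2 * c2 + 6 * c3 * t) + g   # inverse model: thrust accel = h'' + g

v_tf = c1 + 2 * c2 * tf + 3 * c3 * tf**2
print(f"touchdown altitude {h[-1]:.2e} m, touchdown speed {v_tf:.2e} m/s")
```

Because the trajectory is fixed in closed form, the guidance command follows by differentiation alone, with no on-board optimization.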


A numerical model has been developed for simulating the rapid solidification processing (RSP) of Ni-Al alloy, in order to predict the resultant phase composition semi-quantitatively during RSP. The present model couples a method for evaluating the initial nucleation temperature, based on time-dependent nucleation theory, with a model for calculating the solidified volume fraction, based on the kinetics of dendrite growth in an undercooled melt. The model has been applied to predict the cooling curve and the volume fractions of the solidified phases of Ni-Al alloy in planar flow casting. The numerical results agree semi-quantitatively with the experimental results.


A second-order dynamic model based on the general relation between the subgrid-scale stress and the velocity gradient tensors was proposed. A priori tests of the second-order model were made using moderate-resolution direct numerical simulation data at high Reynolds number (Taylor-microscale Reynolds number R_lambda = 102 to 216) for homogeneous isotropic forced flow, decaying flow, and homogeneous rotating flow. Numerical testing shows that the second-order dynamic model significantly improves the correlation coefficient when compared to first-order dynamic models.


This paper analyzes the cyclical properties of a generalized version of the Uzawa-Lucas endogenous growth model. We study the dynamic features of the different cyclical components of this model as characterized by a variety of decomposition methods. The decomposition methods considered can be classified into two groups. On the one hand, we consider three statistical filters: the Hodrick-Prescott filter, the Baxter-King filter, and the Gonzalo-Granger decomposition. On the other hand, we use four model-based decomposition methods. The latter procedures share the property that the cyclical components they produce preserve the log-linear approximation of the Euler-equation restrictions imposed by the agent's intertemporal optimization problem. The paper shows that both model dynamics and model performance vary substantially across decomposition methods. A parallel exercise is carried out with a standard real business cycle model. The results should help researchers better understand the performance of the Uzawa-Lucas model relative to standard business cycle models under alternative definitions of the business cycle.
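Of the statistical filters listed, the Hodrick-Prescott filter is the easiest to state concretely: the trend tau solves (I + lam·D'D) tau = y, where D takes second differences, and the cyclical component is y - tau. A minimal sketch on a synthetic series; the series itself is illustrative, while lam = 1600 is the conventional quarterly value:

```python
import numpy as np

# Hodrick-Prescott filter via a direct linear solve: the trend minimizes
#   sum (y_t - tau_t)^2 + lam * sum (second differences of tau)^2,
# whose first-order condition is (I + lam * D'D) tau = y.
rng = np.random.default_rng(4)
T = 200
trend_true = 0.02 * np.arange(T)                              # slow growth
cycle_true = 0.5 * np.sin(2 * np.pi * np.arange(T) / 32)      # business cycle
y = trend_true + cycle_true + 0.05 * rng.standard_normal(T)

lam = 1600.0                                  # conventional quarterly value
D = np.diff(np.eye(T), n=2, axis=0)           # (T-2) x T second-difference matrix
tau = np.linalg.solve(np.eye(T) + lam * D.T @ D, y)
cycle = y - tau

print(f"std of extracted cycle: {cycle.std():.3f}")
```

The dense solve is fine at this length; for long series the same system is usually solved with a banded or sparse factorization.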


The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations of the highly non-linear model produce a 35-60 Hz modulated activity, which is coherent across the bulb. The decision states (for the odor information) in this system can be thought of as stable cycles, rather than as the point stable states typical of simpler neuro-computing models. Analysis shows that a group of coupled non-linear oscillators is responsible for the oscillatory activities. The output oscillation pattern of the bulb is determined by the odor input. The model provides a framework in which to understand the transformation between odor input and bulbar output to the olfactory cortex. This model can also be extended to other brain areas such as the hippocampus, thalamus, and neocortex, which show oscillatory neural activities. There is significant correspondence between the model behavior and observed electrophysiology.

It has also been suggested that the olfactory bulb, the first processing center after the sensory cells in the olfactory pathway, plays a role in olfactory adaptation, odor sensitivity enhancement by motivation, and other olfactory psychophysical phenomena. The input from the higher olfactory centers to the inhibitory cells in the bulb is shown to be able to modulate the response, and thus the sensitivity, of the bulb to odor input. It follows that the bulb can decrease its sensitivity to a pre-existing and detected odor (adaptation) while remaining sensitive to new odors, or can increase its sensitivity to discover interesting new odors. Other olfactory psychophysical phenomena such as cross-adaptation are also discussed.


Working paper


Steady-state procedures, by their very nature, cannot deal with dynamic situations. Statistical models require extensive calibration, and predictions often have to be made for environmental conditions outside the original calibration conditions. In addition, the calibration requirement makes such models difficult to transfer to other lakes. To date, no computer programs have been developed which successfully predict changes in species of algae. The obvious solution to these limitations is to apply our limnological knowledge to the problem and develop functional models, so reducing the requirement for such rigorous calibration. Reynolds has proposed a model, based on fundamental principles of algal response to environmental events, which has successfully recreated the maximum observed biomass, the timing of events, and a fair simulation of the species succession in several lakes. A forerunner of this model was developed jointly with Welsh Water, under contract to Messrs. Wallace Evans and Partners, for use in the Cardiff Bay Barrage study. In this paper the authors test a much-developed form of this original model against a more complex data-set and, using a simple example, show how it can be applied as an aid in the choice of management strategy for the reduction of problems caused by eutrophication. Some further developments of the model are indicated.