913 results for count data models
Abstract:
Purpose - This paper aims to validate a comprehensive aeroelastic analysis for a helicopter rotor against the higher harmonic control aeroacoustic rotor test (HART-II) wind tunnel test data. Design/methodology/approach - An aeroelastic analysis of a helicopter rotor with elastic blades, based on the finite element method in space and time and capable of considering higher harmonic control inputs, is carried out. Moderate-deflection and Coriolis nonlinearities are included in the analysis. The rotor aerodynamics are represented using free-wake and unsteady aerodynamic models. Findings - Good correlation between the analysis and HART-II wind tunnel test data is obtained for blade natural frequencies across a range of rotating speeds. The basic physics of the blade mode shapes is also well captured. In particular, the fundamental flap, lag and torsion modes compare very well. The blade response compares well with HART-II results and other high-fidelity aeroelastic code predictions for the flap and torsion modes. For the lead-lag response, the present analysis prediction is somewhat better than that of other aeroelastic analyses. Research limitations/implications - Predicted blade response trends with higher harmonic pitch control agreed well with the wind tunnel test data, but usually contained a constant offset in the mean values of the lead-lag and elastic torsion response. Improvements in the modeling of the aerodynamic environment around the rotor can help reduce this gap between the experimental and numerical results. Practical implications - Correlation of predicted aeroelastic response with wind tunnel test data is a vital step towards validating any helicopter aeroelastic analysis. Such efforts lend confidence in using the numerical analysis to understand the actual physical behavior of the helicopter system. Also, validated numerical analyses can take the place of time-consuming and expensive wind tunnel tests during the initial stage of the design process. Originality/value - While the basic physics appears to be well captured by the aeroelastic analysis, there is a need for improvement in the aerodynamic modeling, which appears to be the source of the gap between numerical predictions and the HART-II wind tunnel experiments.
Abstract:
Parkinson's disease (PD) is a debilitating age-related neurological disorder that affects various motor skills and can lead to a loss of cognitive functions. The motor symptoms are the result of the progressive degeneration of dopaminergic neurons within the substantia nigra. The factors that influence the pathogenesis and the progression of the neurodegeneration remain mostly unclear. This study investigated the role of various programmed cell death (PCD) pathways, oxidative stress, and glial cells both in dopaminergic neurodegeneration and in the protective action of various drugs. To this end, we exposed dopaminergic neuroblastoma cells (SH-SY5Y cells) to 6-OHDA, which produces oxidative stress and activates various PCD modalities that result in neuronal degeneration. Additionally, to explore the role of glia, we prepared rat midbrain primary mixed-cell cultures containing both neurons and glial cell types such as microglia and astroglia and then exposed the cultures to either MPP+ or lipopolysaccharide. Our results revealed that 6-OHDA activated several PCD pathways including apoptosis, autophagic stress, lysosomal membrane permeabilization, and perhaps paraptosis in SH-SY5Y cells. Furthermore, we found that minocycline protected SH-SY5Y cells from 6-OHDA by inhibiting both apoptotic and non-apoptotic PCD modalities. We also observed an inconsistent neuroprotective effect of various dietary anti-oxidant compounds against 6-OHDA toxicity in vitro in SH-SY5Y cells. Specifically, quercetin and curcumin exerted neuroprotection only within a narrow concentration range and a limited time frame, whereas resveratrol and epigallocatechin-3-gallate provided no protection whatsoever. Lastly, we found that molecules such as amantadine may delay or even halt the neurodegeneration in primary cell cultures by inhibiting the release of neurotoxic factors from overactivated microglia and by enhancing the pro-survival actions of astroglia. Together, these data suggest that the strategy of dampening oxidative species with anti-oxidants is less effective than preventing the production of toxic factors such as oxidative and pro-inflammatory molecules by pathologically activated microglia. This would subsequently prevent the activation of various PCD modalities that cause neuronal degeneration.
Abstract:
Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find the optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking the folding pathways and in creating the energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method with known results in the literature and data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
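A minimal sketch of the continuous-relaxation idea for HP sequence design, not the paper's formulation: the graph-spectral step is omitted, each residue is given a hypothetical continuous state variable (0 = polar, 1 = hydrophobic) that is optimized by gradient descent over a contact-based energy on a fixed lattice conformation, and the result is rounded to an H/P sequence. The contact list, composition constraint, and energy are illustrative assumptions.

```python
import numpy as np

def design_hp_sequence(contacts, n_residues, n_h, steps=2000, lr=0.05):
    """Continuous relaxation of HP sequence design on a fixed conformation.

    contacts : list of (i, j) non-bonded lattice contacts of the target fold
    n_h      : desired number of hydrophobic residues (composition constraint)
    """
    rng = np.random.default_rng(0)
    s = rng.uniform(0.4, 0.6, n_residues)           # continuous H/P state variables
    for _ in range(steps):
        grad = np.zeros(n_residues)
        for i, j in contacts:
            # Energy term -s_i * s_j rewards hydrophobic-hydrophobic contacts.
            grad[i] -= s[j]
            grad[j] -= s[i]
        # Quadratic penalty keeps the hydrophobic content near the target composition.
        grad += 2.0 * (s.sum() - n_h)
        s = np.clip(s - lr * grad, 0.0, 1.0)
    return ["H" if v > 0.5 else "P" for v in s]

# Toy 6-residue chain with two non-bonded contacts in the target conformation.
print(design_hp_sequence([(0, 5), (1, 4)], n_residues=6, n_h=4))
```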
Abstract:
The problem of structural system identification when measurements originate from multiple tests and multiple sensors is considered. An offline solution to this problem using bootstrap particle filtering is proposed. The central idea of the proposed method is the introduction of a dummy independent variable that allows for simultaneous assimilation of multiple measurements in a sequential manner. The method can treat linear/nonlinear structural models and allows for measurements on strains and displacements under static/dynamic loads. Illustrative examples consider measurement data from numerical models and also from laboratory experiments. The results from the proposed method are compared with those from a Kalman filter-based approach and the superior performance of the proposed method is demonstrated. Copyright (C) 2009 John Wiley & Sons, Ltd.
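A minimal sketch of a bootstrap particle filter for parameter identification, under illustrative assumptions rather than the authors' formulation: a single hypothetical stiffness parameter k, an artificial random-walk evolution for the particles, and a displacement measurement model y = load/k with Gaussian noise.

```python
import numpy as np

def bootstrap_pf(measurements, n_particles=500, q_std=0.05, r_std=0.1):
    """Bootstrap particle filter for a scalar stiffness parameter (illustrative)."""
    rng = np.random.default_rng(0)
    # Initial particle cloud over the unknown stiffness k (hypothetical prior).
    particles = rng.uniform(0.5, 2.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    for y in measurements:
        # Random-walk evolution of the parameter particles (artificial dynamics).
        particles = particles + rng.normal(0.0, q_std, n_particles)
        # Measurement model: displacement = unit load / k, with Gaussian noise (assumed).
        predicted = 1.0 / np.clip(particles, 1e-6, None)
        weights *= np.exp(-0.5 * ((y - predicted) / r_std) ** 2)
        weights /= weights.sum()
        # Multinomial resampling (the "bootstrap" step).
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return particles.mean()

# Example: synthetic displacements from a true stiffness of 1.25 under unit load.
rng = np.random.default_rng(1)
y_obs = 1.0 / 1.25 + rng.normal(0, 0.1, 50)
print(bootstrap_pf(y_obs))
```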
Abstract:
Computerized tomography is an imaging technique which produces a cross-sectional map of an object from its line integrals. Image reconstruction algorithms require a collection of line integrals covering the whole measurement range. However, in many practical situations part of the projection data is inaccurately measured or not measured at all. In such incomplete projection data situations, conventional image reconstruction algorithms such as the convolution back-projection (CBP) algorithm and the Fourier reconstruction algorithm, which assume the projection data to be complete, produce degraded images. In this paper, multiresolution multiscale modeling of the wavelet transform coefficients of the projections is proposed for projection completion. The missing coefficients are then predicted from these models at each scale, followed by an inverse wavelet transform to obtain the estimated projection data.
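A sketch of one way such projection completion could look, assuming the PyWavelets package and a sinogram in which some projection angles are missing; the prediction rule here (interpolating wavelet coefficients across measured angles at each scale) is a deliberately simplified stand-in for the multiresolution multiscale models proposed in the paper.

```python
import numpy as np
import pywt

def complete_projections(sinogram, missing_angles, wavelet="db4", level=3):
    """Estimate missing projections by interpolating wavelet coefficients
    across angles at each scale (simplified stand-in for the paper's models)."""
    n_angles, n_bins = sinogram.shape
    missing = set(missing_angles)
    measured = [a for a in range(n_angles) if a not in missing]
    # Decompose every measured projection into multiscale coefficients.
    coeff_sets = [pywt.wavedec(sinogram[a], wavelet, level=level) for a in measured]
    n_scales = len(coeff_sets[0])
    completed = sinogram.copy()
    for a in missing_angles:
        est_coeffs = []
        for s in range(n_scales):
            band = np.array([c[s] for c in coeff_sets])   # (n_measured, band_len)
            # Predict each coefficient at the missing angle by interpolation.
            est = np.array([np.interp(a, measured, band[:, j])
                            for j in range(band.shape[1])])
            est_coeffs.append(est)
        completed[a] = pywt.waverec(est_coeffs, wavelet)[:n_bins]
    return completed
```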
Abstract:
Competition between seeds within a fruit for parental resources is described using one-locus-two-allele models. While a 'normal' allele leads to an equitable distribution of resources between seeds (a situation which also corresponds to the parental optimum), the 'selfish' allele is assumed to cause the seed carrying it to usurp a higher proportion of the resources. The outcome of competition between 'selfish' alleles is also assumed to lead to an asymmetric distribution of resources, the 'winner' being chosen randomly. Conditions for the spread of an initially rare selfish allele and the optimal resource allocation corresponding to the evolutionarily stable strategy, derived for species with n-seeded fruits, are in accordance with expectations based on Hamilton's inclusive fitness criteria. Competition between seeds is seen to be most intense when there are only two seeds, and decreases with increasing number of seeds, suggesting that two-seeded fruits would be rarer than one-seeded or many-seeded ones. Available data from a large number of plant species are consistent with this prediction of the model.
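The structure of such spread conditions can be illustrated with Hamilton's rule for a selfish trait; the generic two-seed form below is only a sketch, not the paper's n-seed derivation. If the seed carrying the selfish allele gains an extra amount b of parental resources at a cost c to a sibling seed, and r is the coefficient of relatedness between the seeds (1/2 for full sibs, 1/4 for maternal half sibs), the rare selfish allele is favored when

```latex
b - r\,c \;>\; 0 \qquad\Longleftrightarrow\qquad \frac{b}{c} \;>\; r .
```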
Abstract:
The soil moisture characteristic (SMC) forms an important input to mathematical models of water and solute transport in the unsaturated-soil zone. Owing to their simplicity and ease of use, texture-based regression models are commonly used to estimate the SMC from basic soil properties. In this study, the performances of six such regression models were evaluated on three soils. Moisture characteristics generated by the regression models were statistically compared with the characteristics developed independently from laboratory and in-situ retention data of the soil profiles. Results of the statistical performance evaluation, while providing useful information on the errors involved in estimating the SMC, also highlighted the importance of the nature of the data set underlying the regression models. Among the models evaluated, the one possessing an underlying data set of in-situ measurements was found to be the best estimator of the in-situ SMC for all the soils. Considerable errors arose when a textural model based on laboratory data was used to estimate the field retention characteristics of unsaturated soils.
Abstract:
We consider the simplest IEEE 802.11 WLAN networks for which analytical models are available and seek to provide an experimental validation of these models. Our experiments include the following cases: (i) two nodes with saturated queues, sending fixed-length UDP packets to each other, and (ii) a TCP-controlled transfer between two nodes. Our experiments are based entirely on Aruba AP-70 access points operating under Linux. We report our observations on certain non-standard behavior of the devices. In cases where the devices adhere to the standards, we find that the predictions from the analytical models match the experimental data with a mean error of 3-5%.
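The abstract does not name the analytical models; a common choice for saturated 802.11 nodes is Bianchi's fixed-point model, sketched below for n saturated nodes with minimum contention window W and m backoff stages (all parameter values here are illustrative).

```python
def bianchi_fixed_point(n=2, W=16, m=6, iters=1000):
    """Solve Bianchi's saturated-node fixed point for (tau, p) by iteration."""
    tau = 0.1
    for _ in range(iters):
        # Collision probability seen by a tagged node when the other n-1 transmit.
        p = 1.0 - (1.0 - tau) ** (n - 1)
        # Per-slot attempt probability from the binary exponential backoff model.
        tau = (2.0 * (1.0 - 2.0 * p) /
               ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
    return tau, p

tau, p = bianchi_fixed_point(n=2)
print(f"attempt probability tau = {tau:.4f}, collision probability p = {p:.4f}")
```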
Abstract:
In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision making process. Further, it is important that the abstraction is generated from the data with a small number of disk scans. We propose a novel data structure, pattern count tree (PC-tree), that can be built by scanning the database only once. PC-tree is a minimal size complete representation of the data and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from the conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale hand-written digit data set. We use the experimental results to establish the efficacy of the proposed approach. (C) 2002 Elsevier Science B.V. All rights reserved.
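The abstract does not spell out the tree layout; the sketch below assumes a prefix-tree organization in the spirit of a pattern count tree, where each pattern from the database is inserted once and shared prefixes accumulate counts, so the whole structure is built in a single scan.

```python
class PCNode:
    """Node of a pattern count tree: an item, a count, and child nodes."""
    def __init__(self, item=None):
        self.item = item
        self.count = 0
        self.children = {}

def build_pc_tree(transactions):
    """Build the tree in one pass over the database (one 'disk scan')."""
    root = PCNode()
    for pattern in transactions:
        node = root
        for item in sorted(pattern):          # canonical item order keeps prefixes shared
            node = node.children.setdefault(item, PCNode(item))
            node.count += 1
    return root

# Example usage with three small patterns.
tree = build_pc_tree([["a", "b", "c"], ["a", "b"], ["a", "c"]])
print(tree.children["a"].count)               # -> 3
```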
Abstract:
In this paper, the reduced level of rock at Bangalore, India, is obtained from 652 borehole data points in an area covering 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, knowledge of the semivariogram of the reduced level of rock from the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also carried out for the developed ordinary kriging model. The SVM, a type of learning machine based on statistical learning theory that performs regression using an ε-insensitive loss function, has been used to predict the reduced level of rock from a large set of data. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
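A minimal sketch of ε-insensitive support vector regression for this kind of prediction task, using scikit-learn's SVR; the borehole coordinates and reduced levels below are synthetic placeholders, and the kernel, C and epsilon values are illustrative, not those used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic placeholder borehole data: (easting, northing) -> reduced level of rock.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(652, 2))                 # normalized coordinates
y = 900.0 - 5.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 0.5, 652)

# epsilon sets the width of the insensitive tube around the regression surface.
model = SVR(kernel="rbf", C=100.0, epsilon=0.1)
model.fit(X, y)
print(model.predict([[0.5, 0.5]]))                        # predicted rock level at a new point
```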
Abstract:
The memory subsystem is a major contributor to the performance, power, and area of complex SoCs used in feature-rich multimedia products. Hence, the memory architecture of the embedded DSP is complex and usually custom designed with multiple banks of single-ported or dual-ported on-chip scratch pad memory and multiple banks of off-chip memory. Building software for such large complex memories, with many of the software components being individually optimized software IPs, is a big challenge. In order to obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully in the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA), with performance and power being the objectives, and presents a set of solution points which is attractive from a platform design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
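A simplified, mutation-only sketch of the multi-objective idea, not the MODLEX framework itself: each candidate layout is a hypothetical 0/1 vector (off-chip vs. on-chip per buffer), the stall and power costs are made up, and the non-dominated set is retained each generation to approximate the performance-power trade-off curve.

```python
import random

random.seed(0)
N_BUFFERS = 8
# Hypothetical per-buffer costs: stalls if placed off-chip, power if placed on-chip.
STALL_COST = [random.randint(1, 10) for _ in range(N_BUFFERS)]
POWER_COST = [random.randint(1, 10) for _ in range(N_BUFFERS)]

def evaluate(layout):
    """Return (performance cost, power cost) for a 0/1 placement vector."""
    stalls = sum(s for s, bit in zip(STALL_COST, layout) if bit == 0)   # 0 = off-chip
    power = sum(p for p, bit in zip(POWER_COST, layout) if bit == 1)    # 1 = on-chip
    return stalls, power

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    scored = [(ind, evaluate(ind)) for ind in population]
    return [ind for ind, f in scored if not any(dominates(g, f) for _, g in scored)]

def mutate(ind, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in ind]

population = [[random.randint(0, 1) for _ in range(N_BUFFERS)] for _ in range(30)]
for _ in range(50):
    front = pareto_front(population)
    # Refill the population with mutated copies of non-dominated layouts.
    population = front + [mutate(random.choice(front)) for _ in range(max(0, 30 - len(front)))]

print(sorted(evaluate(ind) for ind in pareto_front(population)))
```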
Abstract:
Numerical modeling of saturated subsurface flow and transport has been widely used in the past using different numerical schemes such as finite difference and finite element methods. Such modeling often involves discretization of the problem in spatial and temporal scales. The choice of the spatial and temporal scales for a modeling scenario is often not straightforward. For example, a basin-scale saturated flow and transport analysis demands larger spatial and temporal scales than a meso-scale study, which in turn has larger scales compared to a pore-scale study. The choice of spatial scale is often dictated by the computational capabilities of the modeler as well as the availability of fine-scale data. In this study, we analyze the impact of different spatial scales and scaling procedures on saturated subsurface flow and transport simulations.
Abstract:
A system for temporal data mining includes a computer readable medium having an application configured to receive at an input module a temporal data series having events with start times and end times, a set of allowed dwelling times and a threshold frequency. The system is further configured to identify, using a candidate identification and tracking module, one or more occurrences in the temporal data series of a candidate episode and increment a count for each identified occurrence. The system is also configured to produce at an output module an output for those episodes whose count of occurrences results in a frequency exceeding the threshold frequency.
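The patent-style abstract leaves the counting semantics open; the sketch below assumes events given as (type, start, end) tuples, a serial candidate episode, an allowed dwelling-time interval applied to every event, and a greedy count of non-overlapped occurrences.

```python
def count_occurrences(events, episode, dwell_min, dwell_max):
    """Greedily count non-overlapped occurrences of a serial episode.

    events  : list of (event_type, start_time, end_time), sorted by start time
    episode : tuple of event types, e.g. ("A", "B")
    dwell_min, dwell_max : allowed dwelling time (end - start) for every event
    """
    count, pos = 0, 0
    for etype, start, end in events:
        if etype == episode[pos] and dwell_min <= end - start <= dwell_max:
            pos += 1
            if pos == len(episode):      # one complete occurrence found
                count += 1
                pos = 0
    return count

# Example: one occurrence of A -> B with dwelling times within [0.5, 1.5].
events = [("A", 0, 1), ("B", 2, 3), ("A", 4, 6), ("B", 7, 8)]
print(count_occurrences(events, ("A", "B"), dwell_min=0.5, dwell_max=1.5))  # -> 1
```

The count is then compared against the threshold frequency to decide whether the candidate episode is reported at the output module.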
Abstract:
A careful comparison of the experimental results reported in the literature reveals different variations of the melting temperature even for the same materials. Though there are different theoretical models, the thermodynamic model has been extensively used to understand the different variations of size-dependent melting of nanoparticles. There are different hypotheses, such as homogeneous melting (HMH), liquid nucleation and growth (LNG) and liquid skin melting (LSM), to resolve the different variations of melting temperature reported in the literature. HMH and LNG account for the linear variation, whereas LSM is applied to understand the nonlinear behaviour in the plot of melting temperature against the reciprocal of particle size. However, a bird's eye view reveals that either HMH or LSM has been extensively used by experimentalists. It has also been observed that no single hypothesis can explain the size-dependent melting over the complete range. Therefore, we describe an approach which can predict the plausible hypothesis for a given data set of size-dependent melting temperatures. A variety of data have been analyzed to ascertain the hypothesis and to test the approach.
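Schematic forms, not the paper's exact expressions, illustrate why HMH/LNG yield a linear plot against the reciprocal of particle size while LSM does not: with bulk melting temperature T_b, particle diameter d, material-dependent constants β and β', and a liquid-skin thickness t_0,

```latex
\text{HMH/LNG:}\quad T_m(d) = T_b\!\left(1 - \frac{\beta}{d}\right),
\qquad
\text{LSM:}\quad T_m(d) = T_b\!\left(1 - \frac{\beta'}{d - 2t_0}\right).
```

The first form is linear in 1/d, whereas the second is nonlinear in 1/d, which is the behaviour LSM is invoked to explain.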
Abstract:
Since a universally accepted dynamo model of grand minima does not exist at the present time, we concentrate on the physical processes which may be behind the grand minima. After summarizing the relevant observational data, we make the point that, while the usual sources of irregularities of solar cycles may be sufficient to cause a grand minimum, the solar dynamo has to operate somewhat differently from the normal to bring the Sun out of the grand minimum. We then consider three possible sources of irregularities in the solar dynamo: (i) nonlinear effects; (ii) fluctuations in the poloidal field generation process; (iii) fluctuations in the meridional circulation. We conclude that (i) is unlikely to be the cause behind grand minima, but a combination of (ii) and (iii) may cause them. If fluctuations make the poloidal field fall much below the average or make the meridional circulation significantly weaker, then the Sun may be pushed into a grand minimum.