963 results for Temporal density
Abstract:
Temperature- and density-dependent vibrational relaxation data for the v6 asymmetric stretch of W(CO)6 in supercritical fluoroform (trifluoromethane, CHF3) are presented and compared to a recent theory of solute vibrational relaxation. The theory, which uses thermodynamic and hydrodynamic conditions of the solvent as input parameters, shows very good agreement in reproducing the temperature- and density-dependent trends of the experimental data with a minimum of adjustable parameters. Once a small number of parameters are fixed by fitting the functional form of the density dependence, there are no adjustable parameters in the calculations of the temperature dependence. © 2001 American Institute of Physics.
Abstract:
We study the nature of excited states of long polyacene oligomers within a Pariser-Parr-Pople (PPP) Hamiltonian using the Symmetrized Density Matrix Renormalization Group (SDMRG) technique. We find a crossover between the two-photon state and the lowest dipole-allowed excited state as the system size is increased from tetracene to pentacene. The spin gap is the smallest gap. We also study the equilibrium geometries in the ground and excited states from bond orders and bond-bond correlation functions. We find that the Peierls instability in the ground state of polyacene is conditional, both from energetics and from structure factors computed from correlation functions.
Abstract:
Current-voltage (I-V) and impedance measurements were carried out on doped poly(3-methylthiophene) devices by varying the carrier density. As the carrier concentration is reduced, the I-V characteristics indicate that the conduction mechanism is limited by the metal-polymer interface, as also observed in the impedance data. The temperature dependence of I-V in moderately doped samples shows trap-controlled space-charge-limited conduction (SCLC), whereas in lightly doped devices injection-limited conduction is observed at lower bias and SCLC at higher voltages. The carrier-density-dependent quasi-Fermi level adjustment and trap-limited transport could explain this variation in conduction mechanism. Capacitance measurements at lower frequencies and higher bias voltages show a sign change due to the significant variations in the relaxation behaviour of lightly and moderately doped samples. The electrical hysteresis increases as the carrier density is reduced, owing to the time scales involved in the de-trapping of carriers.
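For reference, the trap-free SCLC regime underlying the trap-controlled behaviour discussed above is commonly described by the Mott-Gurney law, J = (9/8) ε μ V² / L³, with its characteristic quadratic bias dependence. The sketch below is a generic illustration; the mobility, permittivity, and thickness values are assumptions, not parameters from this paper.

```python
# Illustrative sketch of the trap-free space-charge-limited current
# (Mott-Gurney law); mobility, permittivity and thickness values are
# assumptions for a generic polymer diode, not taken from the paper.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mott_gurney_j(voltage, mobility=1e-8, eps_r=3.0, thickness=100e-9):
    """Trap-free SCLC current density J = (9/8) * eps * mu * V^2 / L^3 (A/m^2)."""
    return 9.0 / 8.0 * eps_r * EPS0 * mobility * voltage ** 2 / thickness ** 3

for v in (0.5, 1.0, 2.0, 4.0):
    print(f"V = {v:3.1f} V -> J = {mott_gurney_j(v):.3e} A/m^2")
```

Doubling the bias quadruples J in this regime; departures from the V² slope are one signature of the injection-limited and trap-filling behaviour described in the abstract.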
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlations between genes that have similar temporal profiles. Often, these methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of the interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions, as well as permitting studies of the effects of genetic perturbations. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear-time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirements. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks.
Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM.
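The linear-time inference mentioned above can be illustrated with a generic sketch (our construction, not NETGEM's actual model): a single edge's interaction state is modelled as a two-state Markov chain observed through noisy 0/1 co-expression indicators, and the most likely state sequence is recovered by Viterbi decoding, which is linear in the number of time points. All probabilities and the evidence sequence are illustrative assumptions.

```python
import numpy as np

def decode_edge_states(evidence, p_stay=0.8, emit=((0.7, 0.3), (0.2, 0.8))):
    """Most likely 0/1 interaction-state sequence for one edge, given noisy
    0/1 co-expression evidence per time point (Viterbi, linear in time)."""
    emit = np.asarray(emit)                   # emit[state][observation]
    trans = np.array([[p_stay, 1.0 - p_stay],
                      [1.0 - p_stay, p_stay]])
    n = len(evidence)
    logp = np.log(np.array([0.5, 0.5]) * emit[:, evidence[0]])
    back = np.zeros((n, 2), dtype=int)
    for t in range(1, n):
        cand = logp[:, None] + np.log(trans)  # cand[prev_state, cur_state]
        back[t] = cand.argmax(axis=0)
        logp = cand.max(axis=0) + np.log(emit[:, evidence[t]])
    states = [int(logp.argmax())]
    for t in range(n - 1, 0, -1):             # backtrack the best path
        states.append(int(back[t, states[-1]]))
    return states[::-1]

print(decode_edge_states([1, 1, 1, 0, 0, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 1, 1]
```

The transition prior plays the role of NETGEM's smoothness assumption: isolated noisy observations are absorbed rather than flipping the inferred interaction state.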
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20,000 samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering, and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with the external world while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
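The per-segment, per-channel processing described above parallelises naturally. The sketch below uses a thread pool as a stand-in for the multi-node DSP system; the spike-detection rule and all parameter values are our illustrative assumptions, not the system's actual pipeline.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

FS = 20_000          # 20,000 samples/s, matching the MED64 acquisition rate
SEG = FS // 5        # one 200 ms segment = 4,000 samples per electrode

def detect_spikes(channel, k=4.0):
    """Toy per-channel processing: count upward crossings of a threshold set
    at k times the segment's standard deviation (a crude noise-removal rule)."""
    thresh = k * channel.std()
    return int(np.sum((channel[1:] > thresh) & (channel[:-1] <= thresh)))

rng = np.random.default_rng(0)
frame = rng.normal(size=(64, SEG))             # one 200 ms frame, 64 electrodes
with ThreadPoolExecutor(max_workers=8) as ex:  # 8 workers, mirroring 8 DSPs
    counts = list(ex.map(detect_spikes, frame))
print(len(counts))  # one spike count per electrode
```

Each 200 ms frame must finish before the next arrives, so the per-frame work is divided across workers exactly as the segments are divided across DSP nodes.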
Abstract:
Spatial Decision Support Systems (SDSS) assist in strategic decision-making activities involving spatial and temporal variables, which helps in regional planning. WEPA is an SDSS designed for the spatial assessment of wind potential. A wind energy system transforms the kinetic energy of the wind into mechanical or electrical energy that can be harnessed for practical use. Wind energy can diversify the economies of rural communities, adding to the tax base and providing new types of income. Wind turbines can add a new source of property value in rural areas that have a hard time attracting new industry. Wind speed is an extremely important parameter for assessing the amount of energy a wind turbine can convert to electricity: the energy content of the wind varies with the cube (the third power) of the average wind speed. Estimation of the wind power potential of a site is the most important requirement for selecting a site for the installation of a wind electric generator and evaluating projects in economic terms. It is based on data on the wind frequency distribution at the site, collected from a meteorological mast consisting of a wind anemometer and a wind vane, together with spatial parameters (such as the area available for setting up a wind farm, landscape, etc.). The wind resource is governed by the climatology of the region concerned and has large variability with reference to space (spatial expanse) and time (season) at any fixed location. Hence, wind resource surveys and spatial analysis constitute vital components of programs for exploiting wind energy. The SDSS for assessing the wind potential of a region/location is designed with user-friendly GUIs (Graphical User Interfaces) using VB as the front end and an MS Access database as the backend.
Validation and pilot testing of the WEPA SDSS were done with data collected for 45 locations in Karnataka, based on primary data at selected locations and data collected from the meteorological observatories of the India Meteorological Department (IMD). Wind energy and its characteristics have been analysed for these locations to generate user-friendly reports and spatial maps. The Energy Pattern Factor (EPF) and power densities are computed for sites with hourly wind data. With the knowledge of the EPF and mean wind speed, the mean power density is computed for locations with only monthly data. Wind energy conversion systems would be most effective in these locations from May to August. The analyses show that the coastal and dry arid zones of Karnataka have good wind potential, which if exploited would help local industries, coconut and areca plantations, and agriculture. Pre-monsoon availability of wind energy would help in irrigating these orchards, making wind energy a desirable alternative.
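The cube-law and EPF relationships stated above can be made concrete with a minimal sketch; the wind-speed series is illustrative and the air density is the standard sea-level value, not a WEPA site parameter.

```python
# Minimal sketch of the wind power density computations described above;
# the wind-speed series is illustrative and the air density is the
# standard sea-level value, not a WEPA site parameter.
RHO = 1.225  # air density, kg/m^3

def power_density(speeds):
    """Mean wind power density (W/m^2) from a series of wind speeds (m/s)."""
    return 0.5 * RHO * sum(v ** 3 for v in speeds) / len(speeds)

def energy_pattern_factor(speeds):
    """EPF = mean of the cubed speeds divided by the cube of the mean speed."""
    mean_v = sum(speeds) / len(speeds)
    return (sum(v ** 3 for v in speeds) / len(speeds)) / mean_v ** 3

def power_density_from_epf(epf, mean_speed):
    """Recover mean power density when only the EPF and mean speed are known."""
    return 0.5 * RHO * epf * mean_speed ** 3

speeds = [3.0, 5.0, 7.0, 4.0, 6.0]       # hourly wind speeds, m/s
epf = energy_pattern_factor(speeds)
direct = power_density(speeds)
via_epf = power_density_from_epf(epf, sum(speeds) / len(speeds))
assert abs(direct - via_epf) < 1e-9       # the two routes agree by construction
```

This identity is why the mean power density can be computed from the EPF and mean wind speed alone at locations with only monthly data, as noted above.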
Abstract:
Urbanisation is the increase in the population of cities in proportion to the region's rural population. Urbanisation in India is very rapid, with the urban population growing at around 2.3 percent per annum. Urban sprawl refers to dispersed development along highways, around the city, and in the rural countryside, with implications such as the loss of agricultural land, open space, and ecologically sensitive habitats. Sprawl is thus a pattern and pace of land use in which the rate of land consumed for urban purposes exceeds the rate of population growth, resulting in an inefficient and consumptive use of land and its associated resources. This unprecedented urbanisation trend due to a burgeoning population has posed serious challenges to decision makers in the city planning and management process, involving a plethora of issues such as infrastructure development, traffic congestion, and basic amenities (electricity, water, and sanitation). In this context, to aid decision makers in following holistic approaches to city and urban planning, the analysis and visualisation of urban growth patterns and their impact on natural resources have gained importance. This communication analyses urbanisation patterns and trends using temporal remote sensing data, based on supervised learning using maximum likelihood estimation of multivariate normal density parameters and a Bayesian classification approach. The technique is applied to Greater Bangalore, one of the fastest growing cities in the world, with Landsat data of 1973, 1992 and 2000, IRS LISS-3 data of 1999 and 2006, and MODIS data of 2002 and 2007. The study shows that there has been a growth of 466% in the urban areas of Greater Bangalore across 35 years (1973 to 2007). The study unravels the pattern of growth in Greater Bangalore and its implications for the local climate and natural resources, necessitating appropriate strategies for sustainable management.
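The classification step described above, maximum likelihood with multivariate normal class densities, can be sketched as follows. The two-band training samples are synthetic stand-ins, and the class names and values are illustrative, not real Landsat signatures.

```python
import numpy as np

# Minimal sketch of a maximum-likelihood classifier: each land-use class
# is modelled as a multivariate normal over pixel band values, and a
# pixel is assigned to the class with the highest log-density.
def fit(classes):
    params = {}
    for name, samples in classes.items():
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False)
        params[name] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return params

def classify(pixel, params):
    def log_density(mu, inv_cov, log_det):
        d = pixel - mu
        return -0.5 * (log_det + d @ inv_cov @ d)   # log N(pixel; mu, cov) + const
    return max(params, key=lambda name: log_density(*params[name]))

rng = np.random.default_rng(0)
train = {
    "urban":      rng.normal([180.0, 90.0], 8.0, size=(200, 2)),
    "vegetation": rng.normal([60.0, 150.0], 8.0, size=(200, 2)),
}
params = fit(train)
print(classify(np.array([175.0, 95.0]), params))  # prints "urban"
```

With equal class priors, picking the highest class density is exactly the Bayesian decision rule mentioned in the abstract.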
Abstract:
Bangalore is experiencing unprecedented urbanisation in recent times due to concentrated developmental activities with an impetus on the IT (Information Technology) and BT (Biotechnology) sectors. These concentrated developmental activities have resulted in an increase in population and consequent pressure on infrastructure and natural resources, ultimately giving rise to a plethora of serious challenges such as urban flooding, climate change, etc. One of the perceived impacts at the local level is the increase in sensible heat flux from the land surface to the atmosphere, also referred to as the heat island effect. In this communication, we report the changes in land surface temperature (LST) with respect to land cover changes from 1973 to 2007. A novel technique combining sub-pixel class-proportion information with information from a classified image (using signatures of the respective classes collected from the ground) has been used to achieve more reliable classification. The analysis showed a positive correlation between the increase in paved surfaces and LST. A 466% increase in paved surfaces (buildings, roads, etc.) has led to an increase in LST of about 2 ºC during the last two decades, confirming the urban heat island phenomenon. LSTs were relatively lower (by ~4 to 7 ºC) over land uses such as vegetation (parks/forests) and water bodies, which act as heat sinks.
Abstract:
Various logical formalisms with the freeze quantifier have recently been considered to model computer systems, even though this is a powerful mechanism that often leads to undecidability. In this paper, we study a linear-time temporal logic with past-time operators in which the freeze operator is only used to express that some value from an infinite set is repeated in the future or in the past. This restriction has been inspired by recent work on spatio-temporal logics. We show decidability of finitary and infinitary satisfiability by reduction to the verification of temporal properties in Petri nets. This is a surprising result, since the logic is closed under negation, contains future-time and past-time temporal operators, and can express the nonce property and its negation. These ingredients are known to lead to undecidability with a more liberal use of the freeze quantifier.
Abstract:
A methodology termed the “filtered density function” (FDF) is developed and implemented for large eddy simulation (LES) of chemically reacting turbulent flows. In this methodology, the effects of the unresolved scalar fluctuations are taken into account by considering the probability density function (PDF) of subgrid scale (SGS) scalar quantities. A transport equation is derived for the FDF in which the effect of chemical reactions appears in a closed form. The influences of scalar mixing and convection within the subgrid are modeled. The FDF transport equation is solved numerically via a Lagrangian Monte Carlo scheme in which the solutions of the equivalent stochastic differential equations (SDEs) are obtained. These solutions preserve the Itô-Gikhman nature of the SDEs. The consistency of the FDF approach, the convergence of its Monte Carlo solution and the performance of the closures employed in the FDF transport equation are assessed by comparisons with results obtained by direct numerical simulation (DNS) and by conventional LES procedures in which the first two SGS scalar moments are obtained by a finite difference method (LES-FD). These comparative assessments are conducted by implementations of all three schemes (FDF, DNS and LES-FD) in a temporally developing mixing layer and a spatially developing planar jet under both non-reacting and reacting conditions. In non-reacting flows, the Monte Carlo solution of the FDF yields results similar to those via LES-FD. The advantage of the FDF is demonstrated by its use in reacting flows. In the absence of a closure for the SGS scalar fluctuations, the LES-FD results are significantly different from those based on DNS. The FDF results show a much closer agreement with filtered DNS results. © 1998 American Institute of Physics.
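As a generic illustration of the Lagrangian Monte Carlo machinery described above (our toy construction, not the paper's FDF solver), the sketch below advances notional particles with an Euler-Maruyama step, the standard scheme consistent with the Itô interpretation of the SDEs, while a scalar carried by the particles relaxes toward the mean in the style of common subgrid mixing closures. All parameter values are illustrative.

```python
import numpy as np

# Toy illustration of a Lagrangian Monte Carlo scheme: particle positions
# diffuse via an Euler-Maruyama step (Ito interpretation), while a carried
# scalar relaxes toward the ensemble mean (IEM-style mixing closure).
def step(x, phi, dt, diff=0.1, omega=2.0, rng=None):
    rng = rng or np.random.default_rng()
    x = x + np.sqrt(2.0 * diff * dt) * rng.standard_normal(x.shape)  # Ito diffusion
    phi = phi - omega * (phi - phi.mean()) * dt                      # mixing toward mean
    return x, phi

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10_000)        # particle positions
phi = np.where(x < 0.5, 0.0, 1.0)        # initially segregated scalar
mean0, var0 = phi.mean(), phi.var()
for _ in range(100):
    x, phi = step(x, phi, dt=0.01, rng=rng)
# mixing destroys scalar variance while conserving the mean
```

The scalar variance decays while the mean is conserved, the qualitative behaviour any consistent mixing closure in the FDF transport equation must reproduce.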
Abstract:
Arrays of aligned carbon nanotubes (CNTs) have been proposed for different applications, including electrochemical energy storage and shock-absorbing materials. Understanding their mechanical response, in relation to their structural characteristics, is important for tailoring the synthesis method to the different operational conditions of the material. In this paper, we grow vertically aligned CNT arrays using a thermal chemical vapor deposition system, and we study the effects of precursor flow on the structural and mechanical properties of the CNT arrays. We show that the CNT growth process is inhomogeneous along the direction of the precursor flow, resulting in varying bulk density at different points on the growth substrate. We also study the effects of non-covalent functionalization of the CNTs after growth, using surfactant and nanoparticles, to vary the effective bulk density and structural arrangement of the arrays. We find that the stiffness and peak stress of the materials increase approximately linearly with increasing bulk density.
Abstract:
The statistically steady humidity distribution resulting from an interaction of advection, modelled as an uncorrelated random walk of moist parcels on an isentropic surface, and a vapour sink, modelled as immediate condensation whenever the specific humidity exceeds a specified saturation humidity, is explored with theory and simulation. A source supplies moisture at the deep-tropical southern boundary of the domain and the saturation humidity is specified as a monotonically decreasing function of distance from the boundary. The boundary source balances the interior condensation sink, so that a stationary spatially inhomogeneous humidity distribution emerges. An exact solution of the Fokker-Planck equation delivers a simple expression for the resulting probability density function (PDF) of the water-vapour field and also the relative humidity. This solution agrees completely with a numerical simulation of the process, and the humidity PDF exhibits several features of interest, such as bimodality close to the source and unimodality further from the source. The PDFs of specific and relative humidity are broad and non-Gaussian. The domain-averaged relative humidity PDF is bimodal with distinct moist and dry peaks, a feature which we show agrees with middleworld isentropic PDFs derived from the ERA-Interim dataset. Copyright (C) 2011 Royal Meteorological Society
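The process described above is simple enough to sketch directly. Below is a toy 1-D version with illustrative parameters (not the paper's configuration): parcels random-walk on [0, 1], the saturation humidity decays with distance from the moist source at y = 0, and any excess over saturation condenses immediately.

```python
import math
import random

ALPHA = 4.0  # decay rate of the saturation profile (illustrative)

def simulate(n_parcels=500, n_steps=2000, step=0.02, seed=1):
    rng = random.Random(seed)
    qs = lambda y: math.exp(-ALPHA * y)      # saturation specific humidity
    ys = [rng.random() for _ in range(n_parcels)]
    qv = [qs(y) for y in ys]                 # start saturated everywhere
    for _ in range(n_steps):
        for i in range(n_parcels):
            y = ys[i] + rng.choice((-step, step))
            if y < 0.0:                      # moist boundary: reflect and
                y, qv[i] = -y, 1.0           # resupply the parcel
            elif y > 1.0:
                y = 2.0 - y
            qv[i] = min(qv[i], qs(y))        # immediate condensation sink
            ys[i] = y
    return ys, qv

ys, qv = simulate()
rh = [q / math.exp(-ALPHA * y) for q, y in zip(qv, ys)]  # relative humidity
```

Histogramming the specific and relative humidity by distance from the source is the natural way to inspect the stationary PDFs discussed above; by construction the relative humidity never exceeds saturation.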
Abstract:
The Packaging Research Center has been developing next generation system-on-a-package (SOP) technology with digital, RF, optical, and sensor functions integrated in a single package/module. The goal of this effort is to develop a platform substrate technology providing very high wiring density and embedded thin film passive and active components using PWB compatible materials and processes. The latest SOP baseline process test vehicle has been fabricated on novel Si-matched CTE, high modulus C-SiC composite core substrates using 10 µm thick BCB dielectric films with a loss tangent of 0.0008 and a dielectric constant of 2.65. A semi-additive plating process has been developed for multilayer microvia build-up using BCB without the use of any vacuum deposition or polishing/CMP processes. PWB and package substrate compatible processes such as plasma surface treatment/desmear and electroless/electrolytic pulse reverse plating were used. The smallest line width and space demonstrated in this paper is 6 µm, with microvia diameters in the 15-30 µm range. This build-up process has also been developed on medium CTE organic laminates including MCL-E-679F from Hitachi Chemical and PTFE laminates with a Cu-Invar-Cu core. Embedded decoupling capacitors with a capacitance density of >500 nF/cm² have been integrated into the build-up layers using sol-gel synthesized BaTiO3 thin films (200-300 nm film thickness) deposited on copper foils and integrated using vacuum lamination and subtractive etch processes. Thin metal alloy resistor films have been integrated into the SOP substrate using two methods: (a) NiCrAlSi thin films (25 Ω per square) deposited on copper foils (Gould Electronics), laminated on the build-up layers, with a two-step etch process for resistor definition, and (b) electroless plated Ni-W-P thin films (70 Ω to a few kΩ per square) on the BCB dielectric by plasma surface treatment and activation.
The electrical design and build-up layer structure, along with the key materials and processes used in the fabrication of the SOP4 test vehicle, are presented in this paper. Initial results from the high-density wiring and embedded thin-film components are also presented. The focus of this paper is on the integration of materials, processes and structures in a single package substrate for system-on-a-package (SOP) implementation.
Abstract:
We address the problem of estimating the fundamental frequency of voiced speech. We present a novel solution motivated by the importance of amplitude modulation in sound processing and speech perception. The new algorithm is based on a cumulative spectrum computed from the temporal envelopes of various subbands. We provide a theoretical analysis to derive the new pitch estimator based on the temporal envelope of the bandpass speech signal. We report extensive experimental performance for synthetic as well as natural vowels for both real-world noisy and noise-free data. Experimental results show that the new technique performs accurate pitch estimation and is robust to noise. We also show that the technique is superior to the autocorrelation technique for pitch estimation.
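A simplified sketch in the spirit of the approach (not the authors' exact cumulative-spectrum algorithm; the band edges and the synthetic vowel are our assumptions): the temporal envelope of each subband fluctuates at the fundamental frequency, so accumulating the spectra of the subband envelopes and picking the peak in the typical F0 range yields a pitch estimate.

```python
import numpy as np

def envelope_pitch(x, fs, bands=((300, 900), (900, 1500), (1500, 2100))):
    """Accumulate subband-envelope spectra and pick the peak in the F0 range."""
    n = len(x)
    full = np.fft.fft(x)
    f_full = np.fft.fftfreq(n, 1.0 / fs)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    acc = np.zeros_like(freqs)
    for lo, hi in bands:
        keep = (f_full >= lo) & (f_full <= hi)               # positive-frequency band
        band = np.fft.ifft(np.where(keep, 2.0 * full, 0.0))  # analytic band signal
        env = np.abs(band)                                   # temporal (Hilbert) envelope
        acc += np.abs(np.fft.rfft(env - env.mean()))
    search = (freqs >= 60) & (freqs <= 400)                  # typical voiced F0 range
    return freqs[search][np.argmax(acc[search])]

fs = 8000
t = np.arange(fs) / fs                                       # 1 s synthetic vowel-like signal
x = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 15))
f0 = envelope_pitch(x, fs)
```

For this harmonic complex with a 120 Hz fundamental, the adjacent harmonics in each band beat at F0, so the accumulated envelope spectrum peaks at the fundamental even though no band contains it directly.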