995 results for Infinite time


Relevance:

20.00%

Publisher:

Abstract:

We derive a very general expression for the survival probability and the first passage time distribution of a particle executing Brownian motion in full phase space with an absorbing boundary condition at a point in position space, valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and an approximate first passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances. Knowledge of these variances enables one to compute a lower bound on the survival probability and consequently the first passage distribution function. As examples, we compute these for a Gaussian Markovian process and, for non-Markovian processes, with an exponentially decaying friction kernel and with a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with an exponent equal to the transition-state rate constant.
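The bounds above require the model-specific variances, which the abstract does not give. As a self-contained illustration of the quantities involved, the sketch below simply Monte Carlo-estimates the survival probability S(t) for a free Brownian particle started at the origin with an absorbing point at x = a; this hypothetical special case (and all parameter values) are choices made here, not the paper's general phase-space result.

```python
import numpy as np

def survival_probability(a=1.0, dt=1e-3, t_max=2.0, n_paths=5000, seed=0):
    """Monte Carlo estimate of S(t) = P(a Brownian particle started at the
    origin has not yet reached the absorbing point x = a by time t)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    surv = np.empty(n_steps)
    for i in range(n_steps):
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        alive &= x < a          # absorb any path that crossed the boundary
        surv[i] = alive.mean()
    return np.arange(1, n_steps + 1) * dt, surv

t, S = survival_probability()
# for this special case the exact answer is S(t) = erf(a / sqrt(2 t)),
# so S(2) should be near erf(0.5), i.e. roughly 0.52
```

The first-passage-time density is then obtained as the (negative) time derivative of S(t).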

Partitional clustering algorithms, which partition the dataset into a pre-defined number of clusters, can be broadly classified into two types: algorithms which explicitly take the number of clusters as input and algorithms that take the expected size of a cluster as input. In this paper, we propose a variant of the k-means algorithm and prove that it is more efficient than standard k-means algorithms. An important contribution of this paper is the establishment of a relation between the number of clusters and the size of the clusters in a dataset through the analysis of our algorithm. We also demonstrate that the integration of this algorithm as a pre-processing step in classification algorithms reduces their running-time complexity.
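The abstract does not spell out the proposed variant, so as background the sketch below implements only the baseline it modifies: plain Lloyd-style k-means, taking the number of clusters k as input (the first of the two input conventions mentioned). The data and seeds are made up for illustration.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means: alternate nearest-centre assignment and
    centroid update.  (The paper's more efficient variant is not shown.)"""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids, keeping a centre unchanged if its cluster empties
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# two well-separated blobs; k-means should recover them
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
labels, centers = kmeans(X, 2)
```

Used as a pre-processing step for classification, as the abstract suggests, each cluster can then be handled by a smaller per-cluster model.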

In this paper, we present a low-complexity algorithm for detection in high-rate, non-orthogonal space-time block coded (STBC) large-multiple-input multiple-output (MIMO) systems that achieve high spectral efficiencies of the order of tens of bps/Hz. We also present a training-based iterative detection/channel estimation scheme for such large STBC MIMO systems. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed multistage likelihood ascent search (M-LAS) detector in conjunction with the proposed iterative detection/channel estimation scheme at low complexities. The fact that we could show such good results for large STBCs, like 16 × 16 and 32 × 32 STBCs from Cyclic Division Algebras (CDA) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads meant for pilot-based training for channel estimation and turbo coding), establishes the effectiveness of the proposed detector and channel estimator. We decode perfect codes of large dimensions using the proposed detector. With the feasibility of such a low-complexity detection/channel estimation scheme, large-MIMO systems with tens of antennas operating at several tens of bps/Hz spectral efficiencies can become practical, enabling interesting high data rate wireless applications.
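The M-LAS detector itself is multistage and operates on the STBC structure; the toy sketch below shows only the core likelihood-ascent idea for a small hypothetical BPSK MIMO system: greedy single-symbol flips that reduce the maximum-likelihood metric ||y - Hx||^2, started from a zero-forcing estimate. It is not the paper's algorithm, and all dimensions and noise levels are assumptions.

```python
import numpy as np

def las_detect(H, y, x0):
    """Greedy likelihood ascent search for BPSK (+1/-1) symbols: repeatedly
    flip the single symbol that lowers ||y - Hx||^2 until no flip helps."""
    x = x0.copy()
    cost = np.sum((y - H @ x) ** 2)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = -x[i]                 # trial flip of symbol i
            c = np.sum((y - H @ x) ** 2)
            if c < cost:
                cost, improved = c, True  # keep the flip
            else:
                x[i] = -x[i]             # undo it
    return x

rng = np.random.default_rng(2)
n = 16                                    # hypothetical 16-antenna system
H = rng.standard_normal((n, n)) / np.sqrt(n)
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.001 * rng.standard_normal(n)
x0 = np.sign(np.linalg.solve(H, y))       # zero-forcing initial estimate
x_hat = las_detect(H, y, x0)
```

Each pass costs only a handful of matrix-vector products, which is the source of the low per-symbol complexity that makes such search detectors attractive at large dimensions.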

Dynamic systems involving convolution integrals with decaying kernels, of which fractionally damped systems form a special case, are non-local in time and hence infinite dimensional. Straightforward numerical solution of such systems up to time t needs O(t^2) computations owing to the repeated evaluation of integrals over intervals that grow like t. Finite-dimensional and local approximations are thus desirable. We present here an approximation method which first rewrites the evolution equation as a coupled infinite-dimensional system with no convolution, and then uses Galerkin approximation with finite elements to obtain linear, finite-dimensional, constant-coefficient approximations for the convolution. This paper is a broad generalization, based on a new insight, of our prior work with fractional order derivatives (Singh & Chatterjee 2006, Nonlinear Dyn. 45, 183-206). In particular, the decaying kernels we can address are now generalized to the Laplace transforms of known functions; of these, the power-law kernel of fractional order differentiation is a special case. The approximation can be refined easily. The local nature of the approximation allows numerical solution up to time t with O(t) computations. Examples with several different kernels show excellent performance. A key feature of our approach is that the dynamic system in which the convolution integral appears is itself approximated using another system, as distinct from numerically approximating just the solution for the given initial values; this allows non-standard uses of the approximation, e.g. in stability analyses.
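As a minimal illustration of why a local, finite-dimensional replacement gives O(t) cost, the sketch below assumes the kernel is already a short sum of exponentials (a made-up two-term kernel, not the paper's Galerkin construction): the convolution is then carried by two ODE states marched forward in time, instead of a history integral that grows with t.

```python
import numpy as np

# Hypothetical kernel K(t) = sum_i a_i exp(-b_i t).  Its convolution with f
# satisfies z_i' = -b_i z_i + a_i f(t), (K * f)(t) = sum_i z_i(t): a small
# local ODE system, so marching to time t costs O(t) instead of O(t^2).
a = np.array([1.0, 0.5])
b = np.array([1.0, 10.0])

def f(t):
    return np.sin(t)

dt, t_max = 1e-3, 5.0
n = int(t_max / dt)
z = np.zeros_like(a)                 # internal states, one per exponential
for i in range(n):
    z += dt * (-b * z + a * f(i * dt))   # explicit Euler on z_i' = -b_i z_i + a_i f
conv_local = z.sum()

# brute-force quadrature of (K * f)(t_max) for comparison, O(t^2) overall
s = np.arange(n) * dt
K = (a[:, None] * np.exp(-b[:, None] * (t_max - s))).sum(axis=0)
conv_direct = np.sum(K * f(s)) * dt
```

The two numbers agree to the discretization error, while the ODE form needs no storage of the past and can be embedded directly in a larger dynamic system, which is what enables the non-standard uses (e.g. stability analyses) mentioned above.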

The need for special education (SE) is increasing. The majority of those whose problems are due to neurodevelopmental disorders have no specific aetiology. The aim of this study was to evaluate the contribution of prenatal and perinatal factors and factors associated with growth and development to later need for full-time SE, and to assess joint structural and volumetric brain alterations among subjects with unexplained, familial need for SE. A random sample of 900 subjects in full-time SE, allocated into three levels of neurodevelopmental problems, and 301 controls in mainstream education (ME) provided data on socioeconomic factors, pregnancy, delivery, growth, and development. Of those, 119 subjects belonging to a sibling-pair in full-time SE with unexplained aetiology and 43 controls in ME underwent brain magnetic resonance imaging (MRI). Analyses of structural brain alterations and midsagittal area and diameter measurements were made. Voxel-based morphometry (VBM) analysis provided detailed information on regional grey matter, white matter, and cerebrospinal fluid (CSF) volume differences. Father’s age ≥ 40 years, low birth weight, male sex, and lower socio-economic status all increased the probability of SE placement. At age 1 year, a one standard deviation score decrease in height raised the probability of SE placement by 40%, and in head circumference by 28%. In infancy, the gross motor milestones differentiated the children. From age 18 months, the fine motor milestones and those related to speech and social skills became more important. Brain MRI revealed no specific aetiology for subjects in SE. However, they more often had ≥ 3 abnormal findings on MRI (thin corpus callosum and enlarged cerebral and cerebellar CSF spaces). In VBM, subjects in full-time SE had smaller global white matter, CSF, and total brain volumes than controls.
Compared with controls, subjects with intellectual disabilities had regional volume alterations (greater grey matter volumes in the anterior cingulate cortex bilaterally, smaller grey matter volume in the left thalamus and left cerebellar hemisphere, greater white matter volume in the left fronto-parietal region, and smaller white matter volumes bilaterally in the posterior limbs of the internal capsules). In conclusion, the epidemiological studies emphasized several factors that increased the probability of SE placement, which may serve as a framework for interventional studies. The global and regional brain MRI findings provide an interesting basis for future investigations of learning-related brain structures in young subjects with cognitive impairments or intellectual disabilities of unexplained, familial aetiology.

We propose a self-regularized pseudo-time marching scheme to solve the ill-posed, nonlinear inverse problem associated with diffuse propagation of coherent light in a tissue-like object. In particular, in the context of diffuse correlation tomography (DCT), we consider the recovery of mechanical property distributions from partial and noisy boundary measurements of light intensity autocorrelation. We prove the existence of a minimizer for the Newton algorithm after establishing the existence of weak solutions for the forward equation of light amplitude autocorrelation and its Frechet derivative and adjoint. The asymptotic stability of the solution of the ordinary differential equation obtained through the introduction of the pseudo-time is also analyzed. We show that the asymptotic solution obtained through pseudo-time marching converges to the optimal solution provided the Hessian of the forward equation is positive definite in a neighborhood of the optimal solution. The superior noise tolerance and regularization-insensitive nature of the pseudo-dynamic strategy are demonstrated through numerical simulations in the context of both DCT and diffuse optical tomography. (C) 2010 Optical Society of America.
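The core idea of pseudo-time marching can be shown on a scalar caricature: introduce an artificial time and integrate the resulting ODE until its steady state, which is a minimizer of the data misfit. The sketch below recovers a decay rate from noisy samples by an explicit gradient-flow march; all model choices (the exponential forward map, step size, noise level) are assumptions for illustration, not the paper's PDE-constrained DCT reconstruction or its Newton-type update.

```python
import numpy as np

# Toy inverse problem: recover p in g_i(p) = exp(-p t_i) from noisy data y
# by integrating the pseudo-time gradient flow dp/dtau = -J^T (g(p) - y).
t = np.linspace(0.1, 2.0, 20)
p_true = 1.5
rng = np.random.default_rng(3)
y = np.exp(-p_true * t) + 0.001 * rng.standard_normal(t.size)

p = 0.2                       # deliberately poor initial guess
dtau = 0.05                   # pseudo-time step
for _ in range(4000):
    r = np.exp(-p * t) - y    # residual g(p) - y
    J = -t * np.exp(-p * t)   # sensitivity dg/dp
    p -= dtau * (J @ r)       # explicit pseudo-time (gradient-flow) step
```

Marching to the asymptotic (steady-state) solution, rather than solving a regularized Newton system at each step, is what gives the scheme its self-regularized, regularization-insensitive character described above.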

The blue emission of ethyl-hexyl substituted polyfluorene (PF2/6) films is accompanied by a low-energy green emission peak around 500 nm in inert atmosphere. The intensity of this 500 nm peak is large in electroluminescence (EL) compared to photoluminescence (PL) measurements. Furthermore, the green emission intensity reduces dramatically in the presence of molecular oxygen. To understand this, we have modeled various nonradiative processes by time-dependent quantum many-body methods. These are (i) intersystem crossing, to study conversion of excited singlets to triplets leading to phosphorescence emission, (ii) the electron-hole recombination (e-hR) process in the presence of a paramagnetic impurity, to follow the yield of triplets in a polyene system doped with a paramagnetic metal atom, and (iii) quenching of excited triplet states in the presence of oxygen molecules, to understand the low intensity of EL emission in ambient atmosphere compared with that in nitrogen atmosphere. We have employed the Pariser-Parr-Pople Hamiltonian to model the molecules and have invoked electron-electron repulsions beyond the zero differential overlap approximation while treating interactions between the organic molecule and the rest of the system. Our time-evolution methods show that there is a large cross section for triplet formation in the e-hR process in the presence of a paramagnetic impurity with degenerate orbitals. The triplet yield through the e-hR process far exceeds that through the intersystem crossing pathway, clearly pointing to the large intensity of the 500 nm peak in EL compared to PL measurements. We have also modeled the triplet quenching process by a paramagnetic oxygen molecule, which shows a sizable quenching cross section, especially for systems of large size. These studies show that the most probable origin of the experimentally observed low-energy EL emission is triplet emission.

Spirometry is the most widely used lung function test in the world. It is fundamental in the diagnostic and functional evaluation of various pulmonary diseases. In the studies described in this thesis, the spirometric assessment of reversibility of bronchial obstruction, its determinants, and its variation features are described in a general population sample from Helsinki, Finland. This study is part of the FinEsS study, a collaborative study of the clinical epidemiology of respiratory health between Finland (Fin), Estonia (Es), and Sweden (S). Asthma and chronic obstructive pulmonary disease (COPD) constitute the two major obstructive airways diseases. The prevalence of asthma has increased, with around 6% of the population in Helsinki reporting physician-diagnosed asthma. The main cause of COPD is smoking, with changes in smoking habits in the population affecting its prevalence with a delay. Whereas airway obstruction in asthma is by definition reversible, COPD is characterized by fixed obstruction. Cough and sputum production, the first symptoms of COPD, are often misinterpreted as smoker's cough and not recognized as first signs of a chronic illness. Therefore, COPD is widely underdiagnosed. More extensive use of spirometry in primary care is advocated to focus smoking cessation interventions on populations at risk. The use of forced expiratory volume in six seconds (FEV6) instead of forced vital capacity (FVC) has been suggested to enable office spirometry to be used in earlier detection of airflow limitation. Despite spirometry being a widely accepted standard method of assessing lung function, its methodology and interpretation are constantly developing.
In 2005, the ATS/ERS Task Force issued a joint statement which endorsed the 12% and 200 ml thresholds for significant change in forced expiratory volume in one second (FEV1) or FVC during bronchodilation testing, but included the notion that in cases where only FVC improves, it should be verified that this is not caused by a longer exhalation time in post-bronchodilator spirometry. This elicited new interest in the assessment of forced expiratory time (FET), a spirometric variable not usually reported or used in assessment. In this population sample, we examined FET and found it to be on average 10.7 (SD 4.3) s and to increase with ageing and with airflow limitation in spirometry. The intrasession repeatability of FET was the poorest of the spirometric variables assessed. Based on the intrasession repeatability, a limit of 3 s was suggested for significant change in FET during bronchodilation testing. FEV6 was found to perform as well as FVC in the population and in a subgroup of subjects with airways obstruction. In the bronchodilation test, decreases were frequently observed in FEV1 and particularly in FVC. The limit of significant increase based on the 95th percentile of the population sample was 9% for FEV1 and 6% for FEV6 and FVC; these are slightly lower than the current limits for single bronchodilation tests (ATS/ERS guidelines). FEV6 proved a valid alternative to FVC in the bronchodilation test as well, and would remove the need to control the duration of exhalation during the spirometric bronchodilation test.
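The ATS/ERS 2005 criterion quoted above is simple arithmetic; a minimal sketch (hypothetical example volumes, FEV1 in millilitres):

```python
def significant_bronchodilator_response(pre_ml, post_ml):
    """ATS/ERS 2005 criterion as quoted above: an increase of at least 12%
    of the pre-bronchodilator value AND at least 200 ml in FEV1 (or FVC)
    counts as a significant bronchodilator response."""
    change = post_ml - pre_ml
    return change >= 200 and change >= 0.12 * pre_ml

# FEV1 2000 -> 2300 ml: +300 ml and +15%, meets both thresholds
r1 = significant_bronchodilator_response(2000, 2300)
# FEV1 4000 -> 4300 ml: +300 ml but only +7.5%, fails the 12% threshold
r2 = significant_bronchodilator_response(4000, 4300)
```

The two-part rule makes the percentage threshold decisive at large baseline volumes and the absolute 200 ml threshold decisive at small ones.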

A reduced 3D continuum model of dynamic piezoelectricity in a thin film surface-bonded to a substrate/host is presented in this article. While employing large-area flexible thin piezoelectric films for novel applications in devices/diagnostics, the feasibility of the proposed model in sensing surface and/or sub-surface defects is demonstrated through simulations involving metallic beams with cracks and composite beams with delaminations of various sizes. We introduce a set of electrical measures to capture the severity of damage in existing structures. Characteristics of these electrical measures, in terms of the potential difference and its spatial gradients, are illustrated in the time domain. Sensitivity studies of the proposed measures in terms of the defect areas and their region of occurrence relative to the sensing film are reported. The simulation results for electrical measures for damaged hosts/substrates are compared with those for undamaged hosts/substrates, and show monotonic behaviour with a high degree of sensitivity to variations in the damage parameters.

We study diagonal estimates for the Bergman kernels of certain model domains in C^2 near boundary points that are of infinite type. To do so, we need a mild structural condition on the defining functions of interest that facilitates optimal upper and lower bounds; unlike earlier studies of this sort, we are able to make estimates for non-convex pseudoconvex domains as well. This condition quantifies, in some sense, how flat a domain is at an infinite-type boundary point. In this scheme of quantification, the model domains considered below range, roughly speaking, from being "mildly infinite-type" to being very flat at the infinite-type points.

Information professionals and information organisations use Twitter in a variety of ways. Typically both organisations and the individuals that work for them have separate identities on Twitter, but often individuals identify their organisation through their profile or Twitter content. This paper considers the way information professionals use Twitter and their practices with regard to privacy, personal disclosure, and identifying their organisational affiliations. Drawing on data from a research study involving a questionnaire and social media observation, the paper will provoke discussion about information professionals' use of Twitter, personal and organisational identity, and the value of Twitter for professional development. In keeping with the subject matter, a curated set of social media content will be available in lieu of a formal paper.

PURPOSE: To study the utility of fractional calculus in modeling gradient-recalled echo MRI signal decay in the normal human brain. METHODS: We solved analytically the extended time-fractional Bloch equations, resulting in five model parameters, namely the amplitude, relaxation rate, order of the time-fractional derivative, frequency shift, and constant offset. Voxel-level temporal fitting of the MRI signal was performed using the classical monoexponential model, a previously developed anomalous relaxation model, and our extended time-fractional relaxation model. Nine brain regions segmented from multiple-echo gradient-recalled echo 7 Tesla MRI data acquired from five participants were then used to investigate the characteristics of the extended time-fractional model parameters. RESULTS: We found that the extended time-fractional model is able to fit the experimental data with smaller mean squared error than the classical monoexponential relaxation model and the anomalous relaxation model, which do not account for frequency shift. CONCLUSIONS: We were able to fit multiple-echo-time MRI data with high accuracy using the developed model. Parameters of the model likely capture information on microstructural and susceptibility-induced changes in the human brain.
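Why an extra "order" parameter helps can be illustrated with a crude surrogate. The sketch below uses a stretched exponential A·exp(-(R·t)^beta) + C, which is not the paper's Mittag-Leffler-type solution of the time-fractional Bloch equations (and ignores the frequency-shift term); the synthetic "data", echo times, and grids are all assumptions made here.

```python
import numpy as np

# Synthetic anomalous decay sampled at hypothetical echo times (seconds)
t = np.linspace(0.002, 0.04, 30)
signal = 1.0 * np.exp(-(60.0 * t) ** 0.8) + 0.05

def best_fit_error(betas):
    """Smallest least-squares error over a grid of decay rates R, fitting
    amplitude A and offset C linearly for each candidate shape
    exp(-(R t)^beta).  beta = 1 recovers the monoexponential model."""
    best = np.inf
    for beta in betas:
        for R in np.linspace(20, 120, 51):
            basis = np.exp(-(R * t) ** beta)
            G = np.column_stack([basis, np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(G, signal, rcond=None)
            err = np.sum((G @ coef - signal) ** 2)
            best = min(best, err)
    return best

mono_err = best_fit_error([1.0])             # classical monoexponential
frac_err = best_fit_error([0.6, 0.8, 1.0])   # fractional order allowed too
```

Allowing the order parameter drives the residual essentially to zero on this anomalous decay, while the best monoexponential fit retains a systematic misfit, mirroring the mean-squared-error comparison reported above.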

Using asymptotics, the coupled wavenumbers in an infinite fluid-filled flexible cylindrical shell vibrating in the beam mode (viz. circumferential wave order n = 1) are studied. Initially, the uncoupled wavenumbers of the acoustic fluid and the cylindrical shell structure are discussed. Simple closed-form expressions for the structural wavenumbers (longitudinal, torsional and bending) are derived using asymptotic methods for low and high frequencies. It is found that at low frequencies the cylinder in the beam mode behaves like a Timoshenko beam. Next, the coupled dispersion equation of the system is rewritten in the form of the uncoupled dispersion equation of the structure and the acoustic fluid, with an added fluid-loading term involving a parameter μ due to the coupling. An asymptotic expansion involving μ is substituted in this equation. Analytical expressions are derived for the coupled wavenumbers (as modifications to the uncoupled wavenumbers) separately for the low- and high-frequency ranges and further, within each frequency range, for large and small values of μ. Only the flexural wavenumber, the first rigid-duct acoustic cut-on wavenumber and the first pressure-release acoustic cut-on wavenumber are considered. The general trend found is that for small μ, the coupled wavenumbers are close to the in vacuo structural wavenumber and the wavenumbers of the rigid acoustic duct. With increasing μ, the perturbations increase, until the coupled wavenumbers are better identified as perturbations to the pressure-release wavenumbers. The systematic derivation for the separate cases of small and large μ gives more insight into the physics and helps to continuously track the wavenumber solutions as the fluid-loading parameter is varied from small to large values. Also, it is found that at any frequency where two wavenumbers intersect in the uncoupled analysis, there is no longer an intersection in the coupled case; instead, a gap is created at that frequency.
This method of asymptotics is simple to implement using a symbolic computation package (like Maple). (C) 2008 Elsevier Ltd. All rights reserved.

The coupled wavenumbers of a fluid-filled flexible cylindrical shell vibrating in the axisymmetric mode are studied. The coupled dispersion equation of the system is rewritten in the form of the uncoupled dispersion equation of the structure and the acoustic fluid, with an added fluid-loading term involving a parameter ε due to the coupling. Using the smallness of Poisson's ratio (ν), a double-asymptotic expansion involving ε and ν² is substituted in this equation. Analytical expressions are derived for the coupled wavenumbers (for large and small values of ε). Different asymptotic expansions are used for different frequency ranges, with continuous transitions occurring between them. The wavenumber solutions are continuously tracked as ε varies from small to large values. A general trend observed is that a given wavenumber branch transits from a rigid-walled solution to a pressure-release solution with increasing ε. Also, it is found that at any frequency where two wavenumbers intersect in the uncoupled analysis, there is no longer an intersection in the coupled case; instead, a gap is created at that frequency. Only the axisymmetric mode is considered. However, the method can be extended to the higher order modes.

"Extended Clifford algebras" are introduced as a means to obtain space-time block codes with low ML decoding complexity. Using left regular matrix representations of two specific classes of extended Clifford algebras, two systematic algebraic constructions of full-diversity Distributed Space-Time Codes (DSTCs) are provided for any number of relays that is a power of two. The left regular matrix representation is shown to naturally result in space-time codes meeting the additional constraints required for DSTCs. The DSTCs so constructed have the salient feature of reduced Maximum Likelihood (ML) decoding complexity: in particular, ML decoding of these codes can be performed by applying the lattice decoder algorithm on a lattice of one-fourth the dimension required in general. Moreover, these codes have a uniform distribution of power among the relays and in time, leading to a low Peak-to-Average Power Ratio at the relays.
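The simplest instance of "matrix representation of an algebra gives a space-time block code" is the classic 2×2 Alamouti code, whose column-orthogonality is what permits symbol-by-symbol ML decoding. It is shown below only as background intuition, not as the paper's extended-Clifford-algebra construction.

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 Alamouti space-time block code matrix for two complex symbols:
    rows index time slots, columns index transmit antennas."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

s1, s2 = 0.7 + 0.2j, -0.3 + 0.9j
X = alamouti(s1, s2)
gram = X.conj().T @ X    # equals (|s1|^2 + |s2|^2) * I: columns are orthogonal
```

Because the Gram matrix is a scaled identity, the ML metric decouples per symbol; the codes above generalize this kind of structural decoupling, reducing the lattice-decoding dimension by a factor of four.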