920 results for higher order ambisonics
Abstract:
We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or $J$-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the $J$-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data.
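For reference (this closed form is not spelled out in the abstract): for the zero-mean Gaussian displacement PDFs associated with diffusion tensors $D_1$ and $D_2$ in $n$ dimensions, with covariances proportional to the tensors (the proportionality constant cancels), the log-determinant terms of the two KL divergences cancel under symmetrization, giving
\[ J(D_1, D_2) = \tfrac{1}{2}\left[ \mathrm{KL}(p_1 \,\|\, p_2) + \mathrm{KL}(p_2 \,\|\, p_1) \right] = \tfrac{1}{4}\left[ \mathrm{tr}(D_2^{-1} D_1) + \mathrm{tr}(D_1^{-1} D_2) \right] - \tfrac{n}{2}. \]
Note that some authors define the $J$-divergence without the factor of $\tfrac{1}{2}$ in front of the sum.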
Abstract:
The 'rich club' coefficient describes a phenomenon where a network's hubs (high-degree nodes) are on average more intensely interconnected than lower-degree nodes. Networks with rich clubs often have an efficient, higher-order organization, but we do not yet know how the rich club emerges in the living brain, or how it changes as our brain networks develop. Here we chart the developmental trajectory of the rich club in anatomical brain networks from 438 subjects aged 12-30. Cortical networks were constructed from 68×68 connectivity matrices of fiber density, using whole-brain tractography in 4-Tesla 105-gradient high angular resolution diffusion images (HARDI). The adult and younger cohorts had rich clubs that included different nodes; the rich club effect intensified with age. Rich-club organization is a sign of a network's efficiency and robustness. These concepts and findings may be advantageous for studying brain maturation and abnormal brain development.
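For context, the (unweighted) rich-club coefficient referred to here is conventionally defined, for each degree threshold $k$, as the density of connections among the $N_{>k}$ nodes of degree greater than $k$:
\[ \phi(k) = \frac{2\, E_{>k}}{N_{>k}\,(N_{>k} - 1)}, \]
where $E_{>k}$ is the number of edges among those nodes; in practice $\phi(k)$ is normalized against degree-preserving random networks to establish that the clustering of hubs exceeds chance.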
Abstract:
Population-based brain mapping provides great insight into the trajectory of aging and dementia, as well as brain changes that normally occur over the human life span. We describe three novel brain mapping techniques, cortical thickness mapping, tensor-based morphometry (TBM), and hippocampal surface modeling, which offer enormous power for measuring disease progression in drug trials, and shed light on the neuroscience of brain degeneration in Alzheimer's disease (AD) and mild cognitive impairment (MCI). We report the first time-lapse maps of cortical atrophy spreading dynamically in the living brain, based on averaging data from populations of subjects with Alzheimer's disease and normal subjects imaged longitudinally with MRI. These dynamic sequences show a rapidly advancing wave of cortical atrophy sweeping from limbic and temporal cortices into higher-order association and ultimately primary sensorimotor areas, in a pattern that correlates with cognitive decline. A complementary technique, TBM, reveals the 3D profile of atrophic rates at each point in the brain. A third technique, hippocampal surface modeling, plots the profile of shape alterations across the hippocampal surface. The three techniques provide moderate to highly automated analyses of images, have been validated on hundreds of scans, and are sensitive to clinically relevant changes in individual patients and groups undergoing different drug treatments. We compare time-lapse maps of AD, MCI, and other dementias, correlate these changes with cognition, and relate them to similar time-lapse maps of childhood development, schizophrenia, and HIV-associated brain degeneration. Strengths and weaknesses of these different imaging measures for basic neuroscience and drug trials are discussed.
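As background (not detailed in the abstract), TBM typically quantifies local volume change from the deformation field $\mathbf{u}(\mathbf{x})$ that aligns a follow-up scan to a baseline scan, via the determinant of the local Jacobian matrix:
\[ J(\mathbf{x}) = \det\!\left( \mathbf{I} + \frac{\partial \mathbf{u}}{\partial \mathbf{x}} \right), \]
with $J(\mathbf{x}) < 1$ indicating local tissue loss (atrophy) and $J(\mathbf{x}) > 1$ local expansion.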
Abstract:
The construction industry accounts for a significant portion of the material consumption of our industrialised societies. That material consumption comes at an environmental cost, and when buildings and infrastructure projects are demolished and discarded, after their useful lifespan, that environmental cost remains largely unrecovered. The expected operational lifespan of modern buildings has become disturbingly short as buildings are replaced for reasons of changing cultural expectations, style, serviceability, locational obsolescence and economic viability. The same buildings, however, are not always physically or structurally obsolete; the materials and components within them are very often still completely serviceable. While there is some activity in the area of recycling of selected construction materials, such as steel and concrete, this is almost always in the form of downcycling or reprocessing. Very little of this material and component resource is reused in a way that more effectively captures its potential. One significant impediment to such reuse is that buildings are not designed in a way that facilitates easy recovery of materials and components; they are designed and built for speed of construction and quick economic returns, with little or no consideration of the longer-term consequences of their physical matter. This research project explores the potential for the recovery of materials and components if buildings were designed for such future recovery: a strategy of design for disassembly. This is not a new design philosophy; design for disassembly is well understood in product design and industrial design. There are also some architectural examples of design for disassembly; however, these are specialist examples and there has been no significant attempt to implement the strategy in the mainstream construction industry. This paper presents research into the analysis of the embodied energy in buildings, highlighting its significance in comparison with operational energy. Analysis at material, component, and whole-of-building levels shows the potential benefits of strategically designing buildings for future disassembly to recover this embodied energy. Careful consideration at the early design stage can result in the deconstruction of significant portions of buildings and the recovery of their potential through higher-order reuse and upcycling.
Abstract:
Trypsin-treated rat brain myelin was subjected to biochemical and X-ray studies. Untreated myelin gave rise to a pattern of three rings with a fundamental repeat period of 155 Å consisting of two bilayers per repeat period, whereas myelin treated with trypsin showed a fundamental repeat period of 75 Å with one bilayer per repeat period. The integrated raw intensity of the h=4 reflection with respect to the h=2 reflection is 0.38 for untreated myelin. The corresponding value reduced to 0.23, 0.18, and 0.17 for myelin treated with 5, 10, and 40 units of trypsin per mg of myelin, respectively, for 30 min at 30 degrees C. The decrease in raw intensity of the higher-order reflection relative to the lower-order reflection is suggestive of a disordering of the phosphate groups upon trypsin treatment, an increased mosaicity of the membrane, or a combination of both these effects. However, trypsin treatment does not lead to a complete breakdown of the membrane. The integrated intensity of the h=1 reflection, though weak, is above the measurable threshold for untreated myelin, whereas the corresponding intensity is below the measurable threshold for trypsin-treated myelin, indicating a possible asymmetric-to-symmetric transition of the myelin bilayer structure about its centre after trypsin treatment.
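For reference, the reflection orders $h$ quoted here follow the standard lamellar diffraction relation for a stack of repeat period $d$,
\[ 2 d \sin\theta_h = h \lambda, \]
so the positions of the rings directly distinguish the 155 Å repeat (two bilayers per period) of untreated myelin from the 75 Å repeat (one bilayer per period) after trypsin treatment.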
Abstract:
A sizeable body of research has investigated the impact of specific character strengths or traits on significant outcomes. Some recent research is beginning to consider the effects of groups of strengths, combined as a higher-order variable termed covitality. This study investigated the combined influence of four positive character traits, gratitude, optimism, zest and persistence, upon school engagement, within a sample of 112 Australian primary school students. The combined effect of these four traits, defining covitality as a higher-order (second-order) factor within a path analysis, was found to predict relatively higher levels of school engagement and pro-social behaviour.
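Illustratively (generic notation, not the authors' exact specification), a second-order factor model of this kind has the form
\[ y_{ij} = \lambda_{ij} F_j + \varepsilon_{ij}, \qquad F_j = \gamma_j C + \zeta_j, \]
where the $y_{ij}$ are observed indicators, the $F_j$ are the first-order trait factors (gratitude, optimism, zest, persistence), and $C$ is the second-order covitality factor on which the outcome (school engagement) is regressed.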
Abstract:
The effect of correlations on the viscosity of a dilute sheared inelastic fluid is analyzed using the ring-kinetic equation for the two-particle correlation function. The leading-order contribution to the stress in an expansion in $\epsilon=(1-e)^{1/2}$ is calculated, and it is shown that the leading-order viscosity is identical to that obtained from the Green-Kubo formula, provided the stress autocorrelation function in a sheared steady state is used in the Green-Kubo formula. A systematic extension of this to higher orders is also formulated, and the higher-order contributions to the stress from the ring-kinetic equation are determined in terms of the terms in the Chapman-Enskog solution for the Boltzmann equation. The series is resummed analytically to obtain a renormalized stress equation. The most dominant contributions to the two-particle correlation function are products of the eigenvectors of the conserved hydrodynamic modes of the two correlated particles. In Part I, it was shown that the long-time tails of the velocity autocorrelation function are not present in a sheared fluid. Using those results, we show that correlations do not cause a divergence in the transport coefficients; the viscosity is not divergent in two dimensions, and the Burnett coefficients are not divergent in three dimensions. The equations for three-particle and higher correlations are analyzed diagrammatically. It is found that the contributions of the three-particle and higher correlation functions to the renormalized viscosity are smaller than those of the two-particle distribution function in the limit $\epsilon \to 0$. This implies that the most dominant correlation effects are due to the two-particle correlations.
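For orientation, the Green-Kubo formula referenced here relates the shear viscosity to the time integral of the stress autocorrelation function; in one standard convention, with $\sigma_{xy}$ the volume-averaged shear stress,
\[ \eta = \frac{V}{k_B T} \int_0^\infty \langle \sigma_{xy}(0)\, \sigma_{xy}(t) \rangle \, dt, \]
the abstract's point being that for the sheared inelastic fluid the average must be taken in the sheared steady state rather than at equilibrium.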
Abstract:
Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops, obtained by stochastic simulation techniques, deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to the unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for the residual supersaturation and (ii) the mean number density of particles for each size and supersaturation, from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide, both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance, as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturation distributions at short times but deteriorates progressively at longer times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory, thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce the expensive computation times involved in simulation. More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations, eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.
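Illustratively (generic notation, not reproduced from Part I), a first-order closure of this kind truncates the moment hierarchy by factorizing the cross-correlation between the particle number density $n$ and any function $g$ of the residual supersaturation $s$:
\[ \langle n \, g(s) \rangle \approx \langle n \rangle \, \langle g(s) \rangle, \]
which closes the coupled equations for the supersaturation distribution and the mean particle number density at the cost of neglecting the corresponding fluctuation term.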
Abstract:
Lasers are very efficient in heating localized regions and hence find wide application in surface treatment processes. The surface of a material can be selectively modified to give superior wear and corrosion resistance. In laser surface-melting and welding problems, the high temperature gradient prevailing at the free surface induces a surface-tension gradient, which is the dominant driving force for convection (known as thermo-capillary or Marangoni convection). It has been reported that surface-tension-driven convection plays a dominant role in determining the melt pool shape. In most of the earlier work on laser-melting and related problems, the finite difference method (FDM) has been used to solve the Navier-Stokes equations [1]. Since the Reynolds number is quite high in these cases, upwinding has been used. Though upwinding gives physically realistic solutions even on a coarse grid, the results are inaccurate. McLay and Carey have solved the thermo-capillary flow in welding problems by an implicit finite element method [2]. They used the conventional Galerkin finite element method (FEM), which requires that the pressure be interpolated one order lower than velocity (mixed interpolation). This restricts the choice of elements to certain higher-order elements that need numerical integration for the evaluation of element matrices. The implicit algorithm yields a system of nonlinear, unsymmetric equations which are not positive definite; computations would be possible only with large mainframe computers. Sluzalec [3] has modeled the pulsed laser-melting problem by an explicit FEM. He used the six-node triangular element with mixed interpolation. Since he considered only buoyancy-induced flow, the velocity values are small. In the present work, an equal-order explicit FEM is used to compute the thermo-capillary flow in the laser surface-melting problem. As this method permits equal-order interpolation, there is no restriction on the choice of elements; even linear elements such as the three-node triangle can be used. As the governing equations are solved in a sequential manner, the computer memory requirement is lower. The finite element formulation is discussed in this paper along with typical numerical results.
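For context, the thermo-capillary forcing enters the flow problem as a shear-stress boundary condition on the free surface; in a common two-dimensional form (notation assumed here, not taken from the paper),
\[ \mu \left. \frac{\partial u}{\partial z} \right|_{\text{free surface}} = \frac{\partial \gamma}{\partial T} \, \frac{\partial T}{\partial x}, \]
so the steep surface temperature gradient produced by the laser, acting through the temperature coefficient of surface tension $\partial\gamma/\partial T$, directly drives the melt-pool circulation.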
Abstract:
Time-frequency analyses of various simulated and experimental signals due to elastic wave scattering from damage are performed using the wavelet transform (WT) and the Hilbert-Huang transform (HHT), and their performances are compared in the context of quantifying damage. The spectral finite element method is employed for numerical simulation of the wave scattering. An analytical study is carried out to examine the effects of higher-order damage parameters on the wave reflected from a damage site. Based on this study, error bounds are computed for the signals in the spectral and also in the time-frequency domains. It is shown how such error bounds can provide an estimate of the error in modelling wave propagation in a structure with damage. Damage measures based on WT and HHT are derived to quantify the damage information hidden in the signal. The aim of this study is to obtain detailed insights into the problems of (1) identifying localised damage, (2) characterising the dispersion of multifrequency non-stationary signals after they interact with various types of damage, and (3) quantifying the damage. Sensitivity analysis of the scattered-wave signal based on the time-frequency representation helps to correlate the variation of the damage index measures with damage parameters such as damage size and material degradation factors.
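For reference, the wavelet analysis underlying such time-frequency damage measures is conventionally the continuous transform
\[ W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} s(t)\, \psi^{*}\!\left( \frac{t - b}{a} \right) dt, \]
whose scalogram $|W(a,b)|^2$ localizes the scattered-wave energy in time ($b$) and scale ($a$); the HHT instead decomposes $s(t)$ into intrinsic mode functions and applies Hilbert spectral analysis to each.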
Abstract:
Kirchhoff's theory [1] and the first-order shear deformation theory (FSDT) [2] of plates in bending are simple theories and continue to be used to obtain design information. Within the classical small-deformation theory of elasticity, the problem consists of determining three displacements, u, v, and w, that satisfy three equilibrium equations in the interior of the plate and three specified surface conditions. FSDT is a sixth-order theory with provision to satisfy three edge conditions; unlike Kirchhoff's theory, it maintains an independent linear thicknesswise distribution of tangential displacement even if the lateral deflection, w, is zero along a supported edge. However, each of the in-plane distributions of the transverse shear stresses, which are of a lower order, is expressed as a sum of higher-order displacement terms. Kirchhoff's assumption of zero transverse shear strains is, however, not a limitation of the theory as a first approximation to the exact 3-D solution.
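For concreteness, the FSDT kinematics referred to assume a displacement field of the standard form (notation conventional, not reproduced from [2]):
\[ u(x,y,z) = u_0(x,y) + z\,\phi_x(x,y), \quad v(x,y,z) = v_0(x,y) + z\,\phi_y(x,y), \quad w(x,y,z) = w_0(x,y), \]
so the tangential displacements vary linearly through the thickness independently of $w$, whereas Kirchhoff's theory constrains the rotations to $\phi_x = -\partial w/\partial x$ and $\phi_y = -\partial w/\partial y$.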
Abstract:
High-pressure magnetic susceptibility measurements have been carried out on Fe(dipy)2(NCS)2 and Fe(phen)2(NCS)2 in the pressure range 1–10 kbar and temperature range 80–300 K in order to investigate the factors responsible for the spin-state transitions. The transitions change from first order to second or higher order upon application of pressure. The temperature variation of the susceptibility at different pressures has been analysed quantitatively within the framework of available models. It is shown that the relative magnitudes of the $\Delta G^0$ of high-spin/low-spin conversion and the ferromagnetic interaction between high-spin complexes determine the nature of the transition.
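One standard model of the kind alluded to is the regular-solution (Slichter-Drickamer) description, in which the high-spin fraction $n_{HS}$ at equilibrium satisfies (illustrative form, not necessarily the authors' exact model)
\[ \Delta G^0 + \Gamma \,(1 - 2 n_{HS}) + RT \ln\!\left( \frac{n_{HS}}{1 - n_{HS}} \right) = 0, \]
where the intermolecular interaction parameter $\Gamma$ controls the character of the transition: gradual for $\Gamma < 2RT_{1/2}$, abrupt (first order, with hysteresis) for $\Gamma > 2RT_{1/2}$.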
Abstract:
Timoshenko's shear deformation theory is widely used for the dynamical analysis of shear-flexible beams. This paper presents a comparative study of the shear deformation theory with a higher-order model, of which Timoshenko's shear deformation model is a special case. Results indicate that while Timoshenko's shear deformation theory gives reasonably accurate information regarding the set of bending natural frequencies, there are considerable discrepancies in the information it gives regarding the mode shapes and dynamic response, and so there is a need to consider higher-order models for the dynamical analysis of flexure of beams.
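For reference, the Timoshenko model referred to couples bending and shear through the familiar equation for the transverse deflection $w(x,t)$ of a uniform beam (standard form; the higher-order model studied in the paper generalizes this):
\[ EI \frac{\partial^4 w}{\partial x^4} + \rho A \frac{\partial^2 w}{\partial t^2} - \rho I \left( 1 + \frac{E}{\kappa G} \right) \frac{\partial^4 w}{\partial x^2 \partial t^2} + \frac{\rho^2 I}{\kappa G} \frac{\partial^4 w}{\partial t^4} = 0, \]
where $\kappa$ is the shear correction factor; setting the shear and rotary-inertia terms to zero recovers the Euler-Bernoulli beam.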
Abstract:
In this paper, we have probed the origin of second harmonic generation (SHG) in copper nanoparticles by polarization-resolved hyper-Rayleigh scattering (HRS). Results obtained with various sizes of copper nanoparticles at four different wavelengths covering the range 738-1907 nm reveal that the origin of SHG in these particles is purely dipolar in nature as long as the size (d) of the particles remains small compared to the wavelength (λ) of light (the "small-particle limit"). However, the contribution of higher-order multipoles, coupled with the retardation effect, becomes apparent with an increase in the d/λ ratio. We have identified the small-particle limit in second harmonic generation from noble metal nanoparticles by evaluating the critical d/λ ratio at which the retardation effect sets in. We have found that the second-order nonlinear optical property of copper nanoparticles closely resembles that of gold, but not that of silver.
Abstract:
Perceiving students, science students especially, as mere consumers of facts and information belies the importance of engaging them with the principles underlying those facts, and is counter-intuitive to the facilitation of knowledge and understanding. Traditional didactic lecture approaches need a re-think if student classroom engagement and active learning are to be valued over fact memorisation and fact recall. In our undergraduate biomedical science programs across Years 1, 2 and 3 in the Faculty of Health at QUT, we have developed an authentic learning model with an embedded suite of pedagogical strategies that foster classroom engagement and allow for active learning in the sub-discipline area of medical bacteriology. The suite of pedagogical tools we have developed has been designed to enable translation, with appropriate fine-tuning, to most biomedical and allied health discipline teaching and learning contexts. Indeed, aspects of the pedagogy have been successfully translated to the nursing microbiology study stream at QUT. The aims underpinning the pedagogy are for our students to: (1) connect scientific theory with scientific practice in a more direct and authentic way, (2) construct factual knowledge and facilitate a deeper understanding, and (3) develop and refine their higher-order flexible thinking and problem-solving skills, both semi-independently and independently. The mindset and role of the teaching staff are critical to this approach since, for the strategy to be successful, tertiary teachers need to abandon traditional instructional modalities based on one-way information delivery. Face-to-face classroom interactions between students and lecturer enable realisation of pedagogical aims (1), (2) and (3). The strategy we have adopted encourages teachers to view themselves more as expert guides in what is very much a student-focused process of scientific exploration and learning. Specific pedagogical strategies embedded in the authentic learning model we have developed include: (i) interactive lecture-tutorial hybrids, or “lectorials”, featuring teacher role-plays as well as class-level question-and-answer sessions, (ii) inclusion of “dry” laboratory activities during lectorials to prepare students for the wet laboratory to follow, (iii) real-world problem-solving exercises conducted during both lectorials and wet laboratory sessions, and (iv) class activities and formative assessments designed to probe a student’s higher-order flexible thinking skills. Flexible thinking in this context encompasses analytical, critical, deductive, scientific and professional thinking modes. The strategic approach outlined above is designed to provide multiple opportunities for students to apply principles flexibly according to a given situation or context, to adapt methods of inquiry strategically, to go beyond mechanical application of formulaic approaches, and, as much as possible, to self-appraise their own thinking and problem solving. The pedagogical tools have been developed within both workplace (real-world) and theoretical frameworks. The philosophical core of the pedagogy is a coherent pathway of teaching and learning which we, and many of our students, believe is more conducive to student engagement and active learning in the classroom.
Qualitative and quantitative data derived from online and hardcopy evaluations, solicited and unsolicited student and graduate feedback, anecdotal evidence as well as peer review indicate that: (i) our students are engaging with the pedagogy, (ii) a constructivist, authentic-learning approach promotes active learning, and (iii) students are better prepared for workplace transition.