587 results for Multi-path mitigation


Relevance: 20.00%

Abstract:

This 600+ page online education program provides free access to a comprehensive education and training package that brings together the knowledge of how countries, specifically Australia, can achieve at least 60 percent cuts to greenhouse gas emissions by 2050. This resource has been developed in line with the activities of the CSIRO Energy Transformed Flagship research program, which is focused on research that will assist Australia to achieve this target. This training package provides industry, governments, business and households with the knowledge they need to realise at least 30 percent energy efficiency savings in the short term while providing a strong basis for further improvement. It also provides an updated overview of advances in low carbon technologies, renewable energy and sustainable transport to help achieve a sustainable energy future. While this education and training package has an Australian focus, it outlines sustainable energy strategies and provides links to numerous online reports that will assist climate change mitigation efforts globally. This training program seeks to complement other initiatives that encourage the reduction of greenhouse gas emissions through behaviour change, sustainable consumption, and constructive changes in economic incentives and policy.

Relevance: 20.00%

Abstract:

Particle swarm optimization (PSO), a population-based algorithm, has recently been applied to multi-robot systems. Although the algorithm is used to solve many optimization problems as well as multi-robot tasks, it has drawbacks when applied to multi-robot search systems that must find a target in a search space containing large static obstacles. One of these drawbacks is premature convergence: a basic property of PSO is that particles spread across a search space tend, over time, to converge into a small area. This shortcoming is also evident in a multi-robot search system, particularly when large static obstacles prevent the robots from finding the target easily; as time increases, the robots converge to a small area that may not contain the target and become entrapped there. Another shortcoming is that basic PSO cannot guarantee global convergence: particles initially explore different areas, but in some cases they exploit promising areas poorly, which increases the search time. This study proposes a PSO-based method for a multi-robot system searching for a target in a space containing large static obstacles. The method not only overcomes the premature convergence problem but also establishes an efficient balance between exploration and exploitation and guarantees global convergence, reducing the search time by combining PSO with a local search method such as A-star. To validate the effectiveness and usefulness of the algorithms, a simulation environment was developed for conducting simulation-based experiments in different scenarios and reporting the results. These experimental results demonstrate that the proposed method overcomes the premature convergence problem and guarantees global convergence.
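A minimal sketch of the canonical PSO update that underlies this behaviour may help; this is the textbook velocity/position rule (not the authors' modified, obstacle-aware algorithm), with illustrative inertia and acceleration parameters w, c1 and c2:

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update: each particle's velocity is pulled toward
    its personal best and the swarm's global best, which is why the swarm
    tends to collapse into a small region over time (premature convergence)."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# Toy usage: 20 "robots" searching a 2D space for a target (no obstacles here).
rng = np.random.default_rng(0)
target = np.array([8.0, -3.0])
pos = rng.uniform(-10, 10, size=(20, 2))
vel = np.zeros_like(pos)
cost = lambda p: np.linalg.norm(p - target, axis=1)   # lower is better
pbest, pbest_val = pos.copy(), cost(pos)
for _ in range(100):
    gbest = pbest[np.argmin(pbest_val)]
    pos, vel = pso_step(pos, vel, pbest, gbest, rng=rng)
    val = cost(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
print("best distance to target:", pbest_val.min())
```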

Relevance: 20.00%

Abstract:

Despite substantial progress in measuring the 3D profile of anatomical variations in the human brain, their genetic and environmental causes remain enigmatic. We developed an automated system to identify and map genetic and environmental effects on brain structure in large brain MRI databases. We applied our multi-template segmentation approach ("Multi-Atlas Fluid Image Alignment") to fluidly propagate hand-labeled parameterized surface meshes into 116 scans of twins (60 identical, 56 fraternal), labeling the lateral ventricles. Mesh surfaces were averaged within subjects to minimize segmentation error. We fitted quantitative genetic models at each of 30,000 surface points to measure the proportion of shape variance attributable to (1) genetic differences among subjects, (2) environmental influences unique to each individual, and (3) shared environmental effects. Surface-based statistical maps revealed 3D heritability patterns, and their significance, with and without adjustments for global brain scale. These maps visualized detailed profiles of environmental versus genetic influences on the brain, extending genetic models to spatially detailed, automatically computed, 3D maps.
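For reference, the standard ACE twin model that such point-wise fitting relies on decomposes the variance at each surface point into additive genetic (A), shared-environment (C) and unique-environment (E) components; the expected twin covariances and the heritability estimate follow directly (standard quantitative-genetics notation, sketched here for orientation):

```latex
% ACE variance decomposition and heritability at a single surface point
\[
  \sigma^2_{\mathrm{total}} = \sigma^2_A + \sigma^2_C + \sigma^2_E ,
  \qquad
  h^2 = \frac{\sigma^2_A}{\sigma^2_A + \sigma^2_C + \sigma^2_E} ,
\]
% Expected intrapair covariances for identical (MZ) and fraternal (DZ) twins
\[
  \operatorname{cov}_{\mathrm{MZ}} = \sigma^2_A + \sigma^2_C ,
  \qquad
  \operatorname{cov}_{\mathrm{DZ}} = \tfrac{1}{2}\,\sigma^2_A + \sigma^2_C .
\]
```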

Relevance: 20.00%

Abstract:

Despite substantial progress in measuring the anatomical and functional variability of the human brain, little is known about the genetic and environmental causes of these variations. Here we developed an automated system to visualize genetic and environmental effects on brain structure in large brain MRI databases. We applied our multi-template segmentation approach, termed "Multi-Atlas Fluid Image Alignment", to fluidly propagate hand-labeled parameterized surface meshes, labeling the lateral ventricles, in 3D volumetric MRI scans of 76 identical (monozygotic, MZ) twins (38 pairs; mean age = 24.6, SD = 1.7) and 56 same-sex fraternal (dizygotic, DZ) twins (28 pairs; mean age = 23.0, SD = 1.8), scanned as part of a 5-year research study that will eventually include over 1000 subjects. Mesh surfaces were averaged within subjects to minimize segmentation error. We fitted quantitative genetic models at each of 30,000 surface points to measure the proportion of shape variance attributable to (1) genetic differences among subjects, (2) environmental influences unique to each individual, and (3) shared environmental effects. Surface-based statistical maps, derived from path analysis, revealed patterns of heritability, and their significance, in 3D. Path coefficients for the 'ACE' model that best fitted the data indicated significant contributions from genetic factors (A = 7.3%), common environment (C = 38.9%) and unique environment (E = 53.8%) to lateral ventricular volume. Earlier-maturing occipital horn regions may also be more genetically influenced than later-maturing frontal regions. The maps visualized spatially varying profiles of environmental versus genetic influences. The approach shows promise for automatically measuring gene-environment effects in large image databases.
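A common first-pass way to obtain such A/C/E proportions from twin correlations is Falconer's method; the sketch below uses hypothetical MZ/DZ correlations chosen only for illustration (the study itself uses maximum-likelihood path analysis, which this does not reproduce):

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's estimates of the ACE variance proportions from MZ and DZ
    twin correlations (an illustration, not the paper's path analysis)."""
    a2 = 2.0 * (r_mz - r_dz)    # additive genetic proportion (A)
    c2 = 2.0 * r_dz - r_mz      # shared-environment proportion (C)
    e2 = 1.0 - r_mz             # unique environment plus error (E)
    return a2, c2, e2

# Hypothetical intrapair correlations for lateral ventricular volume:
a2, c2, e2 = falconer_ace(r_mz=0.46, r_dz=0.42)
print(f"A = {a2:.1%}, C = {c2:.1%}, E = {e2:.1%}")
# -> A = 8.0%, C = 38.0%, E = 54.0%
```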

Relevance: 20.00%

Abstract:

We developed and validated a new method to create automated 3D parametric surface models of the lateral ventricles in brain MRI scans, providing an efficient approach to monitor degenerative disease in clinical studies and drug trials. First, we used a set of parameterized surfaces to represent the ventricles in four subjects' manually labeled brain MRI scans (atlases). We fluidly registered each atlas and mesh model to MRIs from 17 Alzheimer's disease (AD) patients and 13 age- and gender-matched healthy elderly control subjects, and 18 asymptomatic ApoE4-carriers and 18 age- and gender-matched non-carriers. We examined genotyped healthy subjects with the goal of detecting subtle effects of a gene that confers heightened risk for Alzheimer's disease. We averaged the meshes extracted for each 3D MR data set, and combined the automated segmentations with a radial mapping approach to localize ventricular shape differences in patients. Validation experiments comparing automated and expert manual segmentations showed that (1) the Hausdorff labeling error rapidly decreased, and (2) the power to detect disease- and gene-related alterations improved, as the number of atlases, N, was increased from 1 to 9. In surface-based statistical maps, we detected more widespread and intense anatomical deficits as we increased the number of atlases. We formulated a statistical stopping criterion to determine the optimal number of atlases to use. Healthy ApoE4-carriers and those with AD showed local ventricular abnormalities. This high-throughput method for morphometric studies further motivates the combination of genetic and neuroimaging strategies in predicting AD progression and treatment response.
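The Hausdorff labeling error used in that validation is the symmetric Hausdorff distance between an automated surface and the corresponding manual one; a minimal sketch using SciPy, with made-up point clouds standing in for the mesh vertices:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(surface_a, surface_b):
    """Symmetric Hausdorff distance between two (N, 3) point sets,
    e.g. an automated and a manually labeled ventricular surface."""
    return max(directed_hausdorff(surface_a, surface_b)[0],
               directed_hausdorff(surface_b, surface_a)[0])

# Toy example: a "manual" surface and an automated one perturbed by noise.
rng = np.random.default_rng(1)
manual = rng.uniform(0, 50, size=(1000, 3))          # stand-in for mesh vertices
automated = manual + rng.normal(scale=0.5, size=manual.shape)
print(f"Hausdorff labeling error: {hausdorff(manual, automated):.2f} (mesh units)")
```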

Relevance: 20.00%

Abstract:

Meta-analyses estimate a statistical effect size for a test or an analysis by combining results from multiple studies without necessarily having access to each individual study's raw data. Multi-site meta-analysis is crucial for imaging genetics, as single sites rarely have a sample size large enough to pick up effects of single genetic variants associated with brain measures. However, if raw data can be shared, combining data in a "mega-analysis" is thought to improve power and precision in estimating global effects. As part of an ENIGMA-DTI investigation, we use fractional anisotropy (FA) maps from 5 studies (total N = 2,203 subjects, aged 9-85) to estimate heritability. We combine the studies through meta- and mega-analyses as well as a mixture of the two: combining some cohorts with mega-analysis and meta-analyzing the results with those of the remaining sites. A combination of mega- and meta-approaches may boost power compared to meta-analysis alone.

Relevance: 20.00%

Abstract:

The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA-DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe, to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18-85 from four sites were used to create a template and corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we used our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at http://enigma.loni.ucla.edu/ongoing/dti-working-group/.

Relevance: 20.00%

Abstract:

Brain connectivity analyses are increasingly popular for investigating the organization of the brain. Many connectivity measures, including path length, are defined on a graph: path length is generally the number of nodes traversed to connect one node to another. Despite its name, path length is purely topological and does not take into account the physical length of the connections. The distance of the trajectory may also be highly relevant, but it is typically overlooked in connectivity analyses. Here we combined genotyping, anatomical MRI and HARDI to understand how our genes influence cortical connections, using whole-brain tractography. We defined a new measure, based on Dijkstra's algorithm, to compute path lengths for tracts connecting pairs of cortical regions. We compiled these measures into matrices whose elements represent the physical distance traveled along tracts. We then analyzed a large cohort of healthy twins and show that our path length measure is reliable, heritable, and influenced, even in young adults, by the Alzheimer's risk gene CLU.
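As a sketch of the idea, with a hypothetical toy graph rather than the authors' tractography output, edges between cortical regions can be weighted by the physical length of the connecting tracts, and Dijkstra's algorithm then yields the distance-weighted path length between every pair of regions:

```python
import networkx as nx

# Hypothetical graph: nodes are cortical regions, edge weights are the
# physical lengths (in mm) of the tracts recovered by tractography.
G = nx.Graph()
G.add_weighted_edges_from([
    ("precentral_L", "postcentral_L", 18.5),
    ("postcentral_L", "superiorparietal_L", 27.0),
    ("precentral_L", "superiorfrontal_L", 34.2),
    ("superiorfrontal_L", "superiorparietal_L", 61.8),
], weight="length_mm")

# Dijkstra over the tract-length weights gives the physical (not merely
# topological) path length matrix between regions.
dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="length_mm"))
print(dist["precentral_L"]["superiorparietal_L"])   # 45.5 mm, via postcentral_L
```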

Relevance: 20.00%

Abstract:

Combining datasets across independent studies can boost statistical power by increasing the number of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies, where a large number of observations is required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for joint analyses of the rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages 9-85) collected with various imaging protocols. We used the imaging genetics analysis tool SOLAR-Eclipse to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large "mega-family". We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical approaches (sample-size weighted and standard-error weighted) and a mega-genetic analysis to calculate heritability estimates across populations. We performed a leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time, to understand the variability of the estimates. Overall, meta- and mega-genetic analyses of heritability produced robust estimates of heritability.
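The two weighting schemes can be written down compactly: the sample-size weighted estimate weights each cohort's heritability by its N, while the standard-error weighted (inverse-variance) estimate weights by 1/SE². The sketch below uses made-up per-cohort values, not the SOLAR-Eclipse output:

```python
import numpy as np

# Hypothetical per-cohort FA heritability estimates, sample sizes and SEs.
h2 = np.array([0.55, 0.62, 0.48, 0.70, 0.58])
n  = np.array([800, 450, 300, 500, 198])
se = np.array([0.05, 0.07, 0.09, 0.06, 0.11])

# Sample-size weighted meta-analytic estimate.
h2_n = np.sum(n * h2) / np.sum(n)

# Standard-error (inverse-variance) weighted estimate and its pooled SE.
w = 1.0 / se**2
h2_iv = np.sum(w * h2) / np.sum(w)
se_iv = np.sqrt(1.0 / np.sum(w))

print(f"sample-size weighted h2      = {h2_n:.3f}")
print(f"inverse-variance weighted h2 = {h2_iv:.3f} +/- {se_iv:.3f}")
```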

Relevance: 20.00%

Abstract:

Diffusion-weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once the optimal TDF has been estimated from the measured signal, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
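In this framework the diffusion signal is a weighted mixture of Gaussian (single-tensor) compartments, which in the continuous limit becomes an integral over tensor space with the TDF P(D) as the weight; the notation below is for orientation only and follows the standard multi-tensor convention:

```latex
% Discrete mixture of Gaussian diffusion compartments ...
\[
  \frac{S(\mathbf{g}, b)}{S_0}
    = \sum_i w_i \exp\!\bigl(-b\,\mathbf{g}^{\mathsf T} D_i\,\mathbf{g}\bigr),
  \qquad \sum_i w_i = 1,
\]
% ... generalized to a continuous Tensor Distribution Function P(D) on the
% space \mathcal{D} of symmetric positive definite tensors:
\[
  \frac{S(\mathbf{g}, b)}{S_0}
    = \int_{\mathcal{D}} P(D)\,\exp\!\bigl(-b\,\mathbf{g}^{\mathsf T} D\,\mathbf{g}\bigr)\, dD .
\]
```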

Relevance: 20.00%

Abstract:

High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and greater angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs), which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
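The reconstruction accuracy quoted there is a Kullback-Leibler divergence between the true and reconstructed angular profiles; on a finite set of directions this reduces to the ordinary discrete KL sum, sketched below with hypothetical sampled values:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q) between two non-negative profiles
    sampled on the same gradient directions (normalized internally)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical "true" and reconstructed profiles on 70 gradient directions.
rng = np.random.default_rng(2)
true_profile = rng.random(70)
recon_profile = np.clip(true_profile + rng.normal(scale=0.05, size=70), 0, None)
print(f"KL(true || reconstruction) = {kl_divergence(true_profile, recon_profile):.4f}")
```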

Relevance: 20.00%

Abstract:

Purpose: The purpose of this paper is to explore the concept of service quality for settings where several customers are involved in the joint creation and consumption of a service. The approach is to provide first insights into the implications of simultaneous multi‐customer integration for service quality.
Design/methodology/approach: This conceptual paper undertakes a thorough review of the relevant literature before developing a conceptual model of service co‐creation and service quality in customer groups.
Findings: Group service encounters must be set up carefully to account for the dynamics (social activity) within a customer group and the skill set and capabilities (task activity) of each of the individual participants involved in a group service experience.
Research limitations/implications: Future research should undertake empirical studies to validate and/or modify the suggested model presented in this contribution.
Practical implications: Managers of service firms should be made aware of the implications and underlying factors of group services in order to create and manage a group experience successfully. Particular attention should be given to those factors that can be influenced by service providers in managing encounters with multiple customers.
Originality/value: This article introduces a new conceptual approach for service encounters with groups of customers in a proposed service quality model. In particular, the paper focuses on integrating the impact of customers' co‐creation activities on service quality in a multiple‐actor model.

Relevance: 20.00%

Abstract:

We initially look at the changing energy environment and how it can dramatically change the potential of alternative energies, in particular organic photovoltaic (OPV) cells. In looking at OPVs we also address where the current state of the art stands and why we may not be getting the best from our materials. In doing so, we propose changing how we build organic photovoltaics by addressing the best method to contain light within the devices. Our initial effort addresses how these microscale optical concentrators, in the form of optical fibers, perform in terms of absorption. We have derived a mathematical method that takes account of the input angle of light to achieve optimum absorption. However, in doing so we also address the complex issue of how the changing refractive indices in a multilayer device alter how we input the light. We have found that, by knowing the material's refractive index, our model takes into account the incident plane, meridional plane, cross-sectional area and path length to ensure optimal angular input. Secondly, we address the practicalities of making such vertical structures, as well as the greater issue of changing light intensity incident on a solar cell and how that aspect alters how we view the performance of organic solar cells.
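As a rough illustration of the geometry involved (plain Snell's-law refraction with an assumed Beer-Lambert absorption, not the authors' derivation), the angle at which light enters a layer of known refractive index sets the path length, and hence the absorbed fraction, in that layer:

```python
import numpy as np

def path_length_in_layer(theta_in_deg, n_in, n_layer, thickness_um):
    """Refract a ray into a layer (Snell's law) and return the geometric
    path length it travels through that layer. Illustrative only."""
    theta_in = np.radians(theta_in_deg)
    theta_layer = np.arcsin(n_in * np.sin(theta_in) / n_layer)
    return thickness_um / np.cos(theta_layer)

# Hypothetical parameters for an OPV active layer (values are assumptions).
n_air, n_active = 1.0, 1.8
thickness_um = 0.2          # 200 nm active layer
alpha_per_um = 10.0         # assumed absorption coefficient

for angle in (0, 30, 60):
    L = path_length_in_layer(angle, n_air, n_active, thickness_um)
    absorbed = 1.0 - np.exp(-alpha_per_um * L)   # Beer-Lambert estimate
    print(f"incidence {angle:2d} deg: path {L:.3f} um, absorbed fraction {absorbed:.2f}")
```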