988 results for Multi-modality
Abstract:
Combining datasets across independent studies can boost statistical power by increasing the number of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies, where a large number of observations is required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for the joint analysis of rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages 9-85) collected with various imaging protocols. We used the imaging genetics analysis tool SOLAR-Eclipse to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large "mega-family". We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical approaches (sample-size weighted and standard-error weighted) and a mega-genetic analysis to calculate across-population heritability estimates. We performed a leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time to understand the variability of the estimates. Overall, meta- and mega-genetic analyses produced robust estimates of heritability.
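For illustration, here is a minimal numerical sketch (not the consortium's code) of the two meta-analytic pooling schemes named above; all cohort values are hypothetical placeholders.

```python
# Sketch of sample-size-weighted and standard-error (inverse-variance)
# weighted pooling of per-cohort heritability estimates. All values below
# are hypothetical, not the ENIGMA-DTI cohorts' results.
import numpy as np

h2 = np.array([0.55, 0.62, 0.48])  # per-cohort heritability estimates (hypothetical)
se = np.array([0.06, 0.05, 0.09])  # their standard errors (hypothetical)
n = np.array([800, 900, 548])      # cohort sample sizes (hypothetical)

# Sample-size-weighted pooled estimate
h2_n = np.sum(n * h2) / np.sum(n)

# Standard-error (inverse-variance) weighted pooled estimate
w = 1.0 / se**2
h2_se = np.sum(w * h2) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))

print(f"sample-size weighted: {h2_n:.3f}")
print(f"inverse-variance weighted: {h2_se:.3f} (SE {se_pooled:.3f})")
```

A leave-one-out check, as in the abstract, simply recomputes these pooled values with each cohort removed in turn.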
Abstract:
Diffusion-weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular-resolution schemes with more than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once the optimal TDF is determined, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
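For concreteness, a hedged reconstruction of the TDF signal model and the resulting ODF in standard notation; the symbols and normalization below are assumptions, not taken verbatim from the paper.

```latex
% Assumed notation: P(D) is the TDF over symmetric positive definite
% tensors D, \mathbf{q} a unit gradient direction, b the diffusion weighting.
\begin{align}
  S(\mathbf{q}) &= S_0 \int_{D \in \mathrm{SPD}(3)} P(D)\,
      \exp\!\left(-b\,\mathbf{q}^{\top} D\,\mathbf{q}\right)\mathrm{d}D,\\
  \mathrm{ODF}(\hat{\mathbf{u}}) &\propto \int_{D \in \mathrm{SPD}(3)} P(D)\,
      \frac{1}{\sqrt{\det D}}\,
      \left(\hat{\mathbf{u}}^{\top} D^{-1}\hat{\mathbf{u}}\right)^{-3/2}\mathrm{d}D.
\end{align}
```

The second line is the analytic radial integration of the Gaussian displacement profile that the abstract refers to, applied to the weighted ensemble.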
Abstract:
High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and more angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs), which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
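The accuracy measure named above is the Kullback-Leibler divergence between the true and estimated ODFs; a minimal sketch follows, assuming both ODFs are discretized on a common set of unit directions (the toy two-fiber profile is illustrative only).

```python
# KL divergence between a ground-truth and a reconstructed ODF, both
# sampled on the same directions and normalized to unit mass.
import numpy as np

def kl_divergence(odf_true, odf_est, eps=1e-12):
    """D_KL(true || estimate) for ODFs sampled on the same unit directions."""
    p = odf_true / odf_true.sum()
    q = odf_est / odf_est.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy two-fiber ODF on 100 random directions (illustrative only)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
f1, f2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
odf_true = (dirs @ f1) ** 4 + (dirs @ f2) ** 4  # sharp peaks along each fiber
odf_est = odf_true + 0.05 * rng.random(100)     # a noisy reconstruction
print(f"KL divergence: {kl_divergence(odf_true, odf_est):.4f}")
```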
Abstract:
Speech recognition can be improved by using visual information, in the form of the speaker's lip movements, in addition to audio information. To date, state-of-the-art techniques for audio-visual speech recognition continue to use audio and visual data from the same database for training their models. In this paper, we present a new approach that makes use of one modality of an external dataset in addition to a given audio-visual dataset. By doing so, it is possible to create more powerful models from other extensive audio-only databases and adapt them to our comparatively smaller multi-stream databases. Results show that for phone recognition the presented approach outperforms, by 29% relative, the widely adopted synchronous hidden Markov models (HMMs) trained jointly on the audio and visual data of a given audio-visual database. It also outperforms the external audio models trained on extensive external audio datasets and the internal audio models by 5.5% and 46% relative, respectively. We also show that the proposed approach is beneficial in noisy environments where the audio source is affected by environmental noise.
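For background, a common fusion mechanism in multi-stream HMM systems of this kind is stream-weighted log-likelihood combination; the sketch below illustrates that general technique under an assumed weight, not the authors' exact adaptation scheme.

```python
# Stream-weighted combination of per-state HMM log-likelihoods from an
# audio and a visual stream. The weight lam is a tunable exponent that
# can be lowered when the audio is noisy. All values are hypothetical.
import numpy as np

def fused_loglik(loglik_audio, loglik_visual, lam=0.7):
    """Per-state score: lam * log p_audio + (1 - lam) * log p_visual."""
    return lam * np.asarray(loglik_audio) + (1.0 - lam) * np.asarray(loglik_visual)

# Hypothetical per-state log-likelihoods for a single frame
audio = np.array([-12.3, -9.8, -15.1])
visual = np.array([-8.7, -11.2, -9.9])
scores = fused_loglik(audio, visual, lam=0.7)
print(scores, int(np.argmax(scores)))  # combined scores and best state index
```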
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieve resilient perception in challenging environmental conditions. However, this may lead to "catastrophic fusion" in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test based on the log marginal likelihood of a Gaussian process model that evaluates data from range sensors in a relative manner. A new data point is deemed to be consistent if the model statistically improves as a result of its fusion. This approach avoids the need for the absolute spatial distance threshold parameters required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
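A minimal sketch of such a relative consistency test, under assumed details (an RBF kernel with fixed hyperparameters, and per-point normalization of the log marginal likelihood, which is our simplification rather than the paper's stated criterion):

```python
# Accept a candidate range reading only if fusing it improves the
# Gaussian process model's (per-point) log marginal likelihood.
import numpy as np

def log_marginal_likelihood(X, y, length=0.2, sigma_f=1.0, sigma_n=0.1):
    """GP log marginal likelihood with an RBF kernel (fixed hyperparameters)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = sigma_f**2 * np.exp(-0.5 * d2 / length**2) + sigma_n**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(X) * np.log(2 * np.pi)

def is_consistent(X, y, x_new, y_new):
    """Relative test: does fusing the new point improve the per-point fit?"""
    base = log_marginal_likelihood(X, y) / len(X)
    X2, y2 = np.vstack([X, x_new]), np.append(y, y_new)
    return log_marginal_likelihood(X2, y2) / len(X2) >= base

# Toy 1D range profile plus one consistent and one spurious candidate reading
rng = np.random.default_rng(1)
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=20)
print(is_consistent(X, y, np.array([[0.5]]), 0.0))  # lies on the surface
print(is_consistent(X, y, np.array([[0.5]]), 3.0))  # far off the surface
```

Note that no absolute distance threshold appears anywhere: the decision is made entirely by comparing model evidence with and without the candidate point.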
Abstract:
Purpose - The purpose of this paper is to explore the concept of service quality for settings where several customers are involved in the joint creation and consumption of a service. The approach is to provide first insights into the implications of simultaneous multi-customer integration on service quality.
Design/methodology/approach - This conceptual paper undertakes a thorough review of the relevant literature before developing a conceptual model regarding service co-creation and service quality in customer groups.
Findings - Group service encounters must be set up carefully to account for the dynamics (social activity) in a customer group and the skill set and capabilities (task activity) of each of the individual participants involved in a group service experience.
Research limitations/implications - Future research should undertake empirical studies to validate and/or modify the suggested model presented in this contribution.
Practical implications - Managers of service firms should be made aware of the implications and underlying factors of group services in order to create and manage a group experience successfully. Particular attention should be given to those factors that service providers can influence when managing encounters with multiple customers.
Originality/value - This article introduces a new conceptual approach to service encounters with groups of customers in a proposed service quality model. In particular, the paper focuses on integrating the impact of customers' co-creation activities on service quality in a multiple-actor model.
Abstract:
Oleaginous microorganisms have the potential to produce oils as an alternative feedstock for biodiesel production. Microalgae (Chlorella protothecoides and Chlorella zofingiensis), yeasts (Cryptococcus albidus and Rhodotorula mucilaginosa), and fungi (Aspergillus oryzae and Mucor plumbeus) were investigated for their ability to produce oil from glucose, xylose and glycerol. Multi-criteria analysis (MCA), using the analytic hierarchy process (AHP) and the preference ranking organization method for enrichment of evaluations (PROMETHEE) with graphical analysis for interactive aid (GAIA), was used to rank and select the preferred microorganisms for oil production for biodiesel application. The ranking was based on a number of criteria: oil concentration, oil content, production rate and yield, substrate consumption rate, fatty acid composition, and biomass harvesting and nutrient costs. PROMETHEE selected A. oryzae, M. plumbeus and R. mucilaginosa as the most prospective species for oil production. However, further analysis by GAIA webs identified A. oryzae and M. plumbeus as the best-performing microorganisms.
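For illustration, a minimal sketch of the PROMETHEE II ranking machinery, using the "usual" preference function and hypothetical scores and weights rather than the study's data:

```python
# PROMETHEE II: pairwise preferences per criterion, weighted aggregation,
# then positive/negative outranking flows and a net-flow ranking.
import numpy as np

scores = np.array([    # rows: alternatives; cols: criteria (all maximized)
    [0.8, 0.6, 0.7],   # e.g. A. oryzae      (values hypothetical)
    [0.7, 0.7, 0.6],   # e.g. M. plumbeus
    [0.5, 0.4, 0.8],   # e.g. R. mucilaginosa
])
weights = np.array([0.5, 0.3, 0.2])  # criterion weights (hypothetical)

n = len(scores)
pi = np.zeros((n, n))                # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        d = scores[a] - scores[b]
        pi[a, b] = np.sum(weights * (d > 0))  # "usual" preference function

phi_plus = pi.sum(axis=1) / (n - 1)   # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative (entering) flow
phi = phi_plus - phi_minus            # net flow: higher is better
print(np.argsort(-phi))               # alternatives ranked best-first
```

GAIA is essentially a principal-component visualization of the per-criterion flows behind these net scores, which is why it can separate alternatives that PROMETHEE ranks closely.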
Abstract:
This research investigated the use of DNA fingerprinting to characterise the bacterium Streptococcus pneumoniae (pneumococcus), and hence gain insight into the development of new vaccines or antibiotics. Different bacterial DNA fingerprinting methods were studied, and a novel method was developed and validated that characterises the different cell coatings that pneumococci produce. This method was used to study the epidemiology of pneumococci in Queensland before and after the introduction of the current pneumococcal vaccine. The study demonstrated that pneumococcal disease is highly prevalent in children under four years, that the bacterium can "switch" its cell coating to evade the vaccine, and that some DNA fingerprinting methods are more discriminatory than others. This has an impact on understanding which strains are more prone to cause invasive disease. The research findings have been published in high-impact, internationally refereed journals.
Abstract:
An innovative cement-based soft-hard-soft (SHS) multi-layer composite has been developed for protective infrastructure. The composite consists of three layers: asphalt concrete (AC), high-strength concrete (HSC), and engineered cementitious composites (ECC). A three-dimensional benchmark numerical model of this SHS composite as pavement under blast load was established using LS-DYNA and validated by a field blast test. Parametric studies were carried out to investigate the influence of key parameters, including the thickness and strength of the HSC and ECC layers, interface properties, and soil conditions, on the blast resistance of the composite. The outcomes of this study also enabled the establishment of a damage pattern chart for protective pavement design and rapid repair after blast load. Efficient methods to further improve the blast resistance of the SHS multi-layer pavement system are also recommended.
Abstract:
Our aim is to examine evidence-based strategies to motivate appropriate action and increase informed decision-making during the response and recovery phases of disasters. We combine expertise in communication, consumer psychology and marketing, disaster and emergency management, and law. This poster presents findings from a social media work package, and preliminary findings from the focus group work package on emergency warning message comprehension.
Abstract:
This paper proposes a new multi-resource multi-stage mine production timetabling problem for optimising open-pit drilling, blasting and excavating operations under equipment capacity constraints. The flow process is analysed based on real-life data from an Australian iron ore mine site. The objective of the model is to maximise throughput and minimise the total idle time of equipment at each stage. The following comprehensive mining attributes and constraints are considered: types of equipment; operating capacities of equipment; ready times of equipment; speeds of equipment; block-sequence-dependent movement times; equipment-assignment-dependent operational times; etc. The model also provides the availability and usage of equipment units at multiple operational stages, such as the drilling, blasting and excavating stages. The problem is formulated as a mixed integer program and solved with the ILOG CPLEX optimiser. The proposed model is validated with extensive computational experiments to improve mine production efficiency at the operational level.
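To give a flavour of the formulation style, here is a toy mixed integer program (hypothetical data, a single stage, and the open-source PuLP package rather than ILOG CPLEX) that assigns blocks to equipment units under capacity constraints:

```python
# Toy one-stage assignment MIP: maximise tonnes processed while keeping
# each block on at most one rig and each rig within its capacity.
# Requires the PuLP package; all data are hypothetical.
import pulp

blocks = ["b1", "b2", "b3", "b4"]
rigs = ["drill1", "drill2"]
capacity = {"drill1": 2, "drill2": 1}              # max blocks per rig
tonnes = {"b1": 50, "b2": 70, "b3": 40, "b4": 60}  # block sizes

prob = pulp.LpProblem("mine_stage_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", (blocks, rigs), cat="Binary")

# Objective: maximise total tonnes processed at this stage
prob += pulp.lpSum(tonnes[b] * x[b][r] for b in blocks for r in rigs)

# Each block assigned to at most one rig; each rig within capacity
for b in blocks:
    prob += pulp.lpSum(x[b][r] for r in rigs) <= 1
for r in rigs:
    prob += pulp.lpSum(x[b][r] for b in blocks) <= capacity[r]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for b in blocks:
    for r in rigs:
        if x[b][r].value() == 1:
            print(f"{b} -> {r}")
```

The paper's full model layers sequencing, movement times, and multiple stages on top of this kind of assignment core, which is what makes a commercial solver such as CPLEX necessary at realistic scale.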
Abstract:
We address the problem of rangefinder-based avoidance of unforeseen static obstacles during a visual navigation task. We extend previous strategies that are efficient in most cases but still hampered by some drawbacks (e.g., risks of collision or of local minima in particular cases). The key idea is to complete the control strategy by adding a controller that provides the robot with anticipative skills to guarantee non-collision, and by defining more general transition conditions to deal with local minima. Simulation results show the efficiency of the proposed strategy.
Abstract:
Chlamydia pecorum is globally associated with several ovine diseases, including keratoconjunctivitis and polyarthritis. The exact relationship between the variety of C. pecorum strains reported and the diseases described in sheep remains unclear, challenging efforts to accurately diagnose and manage infected flocks. In the present study, we applied C. pecorum multi-locus sequence typing (MLST) to C. pecorum-positive samples collected from sympatric flocks of Australian sheep presenting with conjunctivitis, conjunctivitis with polyarthritis, polyarthritis only, or no clinical disease (NCD), in order to elucidate the exact relationships between the infecting strains and the range of diseases. Using Bayesian phylogenetic and cluster analyses on 62 C. pecorum-positive ocular, vaginal and rectal swab samples from sheep presenting with a range of diseases, and in comparison to C. pecorum sequence types (STs) from other hosts, one ST (ST 23) was recognised as a globally distributed strain associated with ovine and bovine diseases such as polyarthritis and encephalomyelitis. A second ST (ST 69), presently described only in Australian animals, was detected in association with ovine as well as koala chlamydial infections. The majority of vaginal and rectal C. pecorum STs from animals with NCD and/or from anatomical sites with no clinical signs of disease in diseased animals clustered together in a separate group in both analyses. Furthermore, 8 of the 13 detected STs were novel. This study provides a platform for strain selection for further research into the pathogenic potential of C. pecorum in animals and highlights targets for potential strain-specific diagnostic test development.
Abstract:
Objective - To develop a child victimization survey among a diverse group of child protection experts and examine the performance of the instrument through a set of international pilot studies.
Methods - The initial draft of the instrument was developed after input from scientists and practitioners representing 40 countries. Volunteers from the larger group of scientists participating in the Delphi review of the ICAST P and R reviewed the ICAST C by email in two rounds, resulting in a final instrument. The ICAST C was then translated and back-translated into six languages and field tested in four countries using a convenience sample of 571 children 12–17 years of age, selected from schools and classrooms to which the investigators had easy access.
Results - The final ICAST C Home has 38 items and the ICAST C Institution has 44 items. These items serve as screeners, and positive endorsements are followed by queries for frequency and perpetrator. Half of the respondents were boys (49%). Endorsement of various forms of victimization ranged from 0 to 51%. Many children reported violence exposure (51%), physical victimization (55%), psychological victimization (66%), sexual victimization (18%), and neglect (37%) in their homes in the last year. High rates of physical victimization (57%), psychological victimization (59%), and sexual victimization (22%) were also reported in schools in the last year. Internal consistency was moderate to high (alpha between .685 and .855) and missing data low (less than 1.5% for all but one item).
Conclusions - In pilot testing, the ICAST C identified high rates of child victimization in all domains. Rates of missing data are low, and internal consistency is moderate to high. Pilot testing demonstrated the feasibility of using child self-report as one strategy to assess child victimization.
Practice implications - The ICAST C is a multi-national, multi-lingual, consensus-based survey instrument. It is available in six languages for international research to estimate child victimization. Assessing the prevalence of child victimization is critical to understanding the scope of the problem, setting national and local priorities, and garnering support for program and policy development aimed at child protection.
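For reference, the internal-consistency statistic reported above is Cronbach's alpha; a minimal sketch of that computation on a hypothetical item-response matrix:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scored responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents x 4 screener items (0/1 endorsements, hypothetical)
responses = np.array([
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")
```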
Abstract:
We study the impact of progress feedback on players' performance in multi-contest team tournaments in which team members' efforts are not directly substitutable. In particular, we employ a real-effort laboratory experiment to understand, in a best-of-three tournament, how players' strategic mindsets change when they compete on a team compared with when they compete individually. Our data corroborate the theoretical predictions for teams: neither a lead nor a lag in the first component contest affects a team's performance in the subsequent contests. In individual tournaments, however, contrary to the theoretical prediction, we observe that leaders perform worse, but laggards perform better, after learning the outcome of the first contest. Our findings offer the first empirical evidence from a controlled laboratory on how the impact of progress feedback differs between team and individual tournaments, and contribute new insights on team incentives.