259 results for Combine harvester
Abstract:
In the experience economy, the role of art museums has evolved to cater to global cultural tourists. These institutions were traditionally dedicated to didactic functions, and served cognoscenti with elite cultural tastes that were aligned with the avant-garde’s autonomous stance towards mass culture. In a post-avant-garde era, however, museums have focused on appealing to a broad clientele that often has little or no knowledge of historical or contemporary art. Many of these tourists want art to provide entertaining and novel experiences, rather than pedagogical ‘training’. In response, art museums are turning into ‘experience venues’ and are being informed by ideas associated with new museology, as well as business approaches like Customer Experience Management. This has led to the provision of populist entertainment modes, such as blockbuster exhibitions, participatory art events, jazz nights, and wine tasting, and reveals that such museums recognize that today’s cultural tourist is part of an increasingly diverse and populous demographic, which shares many languages and value systems. As art museums have shifted attention to global tourists, they have come to play a greater role in gentrification projects and cultural precincts. The art museum now seems ideally suited to tourist-centric environments that offer a variety of immersive sensory experiences and combine museums (often designed by star-architects), international hotels, restaurants, high-end shopping zones, and other leisure forums. These include sites such as the Port Maravilha urban waterfront development in Rio de Janeiro, the Museum of Old and New Art in Hobart, and the Chateau La Coste winery and hotel complex in Provence. It can be argued that in a global experience economy, art museums have become experience centres in experience-scapes.
This paper will examine the nature of the tourist experience in relation to the new art museum, and the latter’s increasingly important role in attracting tourists to urban and regional cultural precincts.
Abstract:
Objective: The aim of this systematic review and meta-analysis was to determine the overall effect of resistance training (RT) on measures of muscular strength in people with Parkinson’s disease (PD). Methods: Controlled trials with a parallel-group design were identified from computerized literature searching and citation tracking performed until August 2014. Two reviewers independently screened for eligibility and assessed the quality of the studies using the Cochrane risk of bias tool. For each study, mean differences (MD) or standardized mean differences (SMD) and 95% confidence intervals (CI) were calculated for continuous outcomes based on between-group comparisons using post-intervention data. Subgroup analysis was conducted based on differences in study design. Results: Nine studies met the inclusion criteria; all had a moderate to high risk of bias. Pooled data showed that knee extension, knee flexion and leg press strength were significantly greater in PD patients who undertook RT compared to control groups with or without interventions. Subgroups were: RT vs. control-without-intervention, RT vs. control-with-intervention, RT-with-other-form-of-exercise vs. control-without-intervention, and RT-with-other-form-of-exercise vs. control-with-intervention. Pooled subgroup analysis showed that RT combined with aerobic/balance/stretching exercise resulted in significantly greater knee extension, knee flexion and leg press strength compared with no intervention. Compared to treadmill or balance exercise, it resulted in greater knee flexion, but not knee extension or leg press strength. RT alone resulted in greater knee extension and flexion strength compared to stretching, but not in greater leg press strength compared to no intervention. Discussion: Overall, the current evidence suggests that exercise interventions that contain RT may be effective in improving muscular strength in people with PD compared with no exercise.
However, depending on muscle group and/or training dose, RT may not be superior to other exercise types. Interventions which combine RT with other exercise may be most effective. Findings should be interpreted with caution due to the relatively high risk of bias of most studies.
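The SMD-with-CI computation described in the Methods can be sketched in a few lines: the function below computes Cohen's d from post-intervention group summaries with a large-sample 95% confidence interval. The strength values in the example are hypothetical, not data from the reviewed trials.

```python
import math

def smd_with_ci(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) between treatment and
    control post-intervention scores, with an approximate 95% CI."""
    # Pooled standard deviation across the two groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    # Large-sample standard error of d
    se = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical knee-extension strength summaries (Nm): RT group vs. control
d, (lo, hi) = smd_with_ci(150.0, 20.0, 15, 135.0, 22.0, 15)
```

Per-study effects like `d` would then be pooled (e.g. inverse-variance weighted) across trials to obtain the overall estimate the review reports.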
Abstract:
We defined a new statistical fluid registration method with Lagrangian mechanics. Although several authors have suggested that empirical statistics on brain variation should be incorporated into the registration problem, few algorithms have included this information; most instead use regularizers that guarantee diffeomorphic mappings. Here we combine the advantages of a large-deformation fluid matching approach with empirical statistics on population variability in anatomy. We reformulated the Riemannian fluid algorithm developed in [4], and used a Lagrangian framework to incorporate 0th- and 1st-order statistics in the regularization process. 92 2D midline corpus callosum traces from a twin MRI database were fluidly registered using the non-statistical version of the algorithm (algorithm 0), giving initial vector fields and deformation tensors. Covariance matrices were computed for both distributions and incorporated either separately (algorithm 1 and algorithm 2) or together (algorithm 3) in the registration. We computed heritability maps and two vector- and tensor-based distances to compare the power and robustness of the algorithms.
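The idea of a 0th-order statistical prior can be illustrated, in heavily simplified form, as a Mahalanobis penalty that scores a displacement against the empirical mean and covariance of a population of displacements. This is only a sketch of the idea with made-up values, not the authors' Lagrangian implementation.

```python
import numpy as np

def statistical_penalty(v, mean_v, cov_v):
    """Mahalanobis-type regularization penalty: penalizes a displacement
    vector v for deviating from empirical population statistics
    (mean mean_v, covariance cov_v) instead of using a generic smoother."""
    diff = v - mean_v
    return float(diff @ np.linalg.inv(cov_v) @ diff)

# Hypothetical population statistics for a single 2D displacement
mean_v = np.array([0.5, -0.2])
cov_v = np.array([[0.04, 0.0], [0.0, 0.09]])
p_typical = statistical_penalty(np.array([0.5, -0.2]), mean_v, cov_v)
p_atypical = statistical_penalty(np.array([1.5, 0.4]), mean_v, cov_v)
```

A displacement matching the population mean incurs zero penalty, while an atypical one is penalized in proportion to how unlikely it is under the empirical covariance.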
Abstract:
Cognitive scientists were not quick to embrace the functional neuroimaging technologies that emerged during the late 20th century. In this new century, cognitive scientists continue to question, not unreasonably, the relevance of functional neuroimaging investigations that fail to address questions of interest to cognitive science. However, some ultra-cognitive scientists assert that these experiments can never be of relevance to the study of cognition. Their reasoning reflects an adherence to a functionalist philosophy that arbitrarily and purposefully distinguishes mental information-processing systems from brain or brain-like operations. This article addresses whether data from properly conducted functional neuroimaging studies can inform and subsequently constrain the assumptions of theoretical cognitive models. The article commences with a focus upon the functionalist philosophy espoused by the ultra-cognitive scientists, contrasting it with the materialist philosophy that motivates both cognitive neuroimaging investigations and connectionist modelling of cognitive systems. Connectionism and cognitive neuroimaging share many features, including an emphasis on unified cognitive and neural models of systems that combine localist and distributed representations. The utility of designing cognitive neuroimaging studies to test (primarily) connectionist models of cognitive phenomena is illustrated using data from functional magnetic resonance imaging (fMRI) investigations of language production and episodic memory.
Abstract:
Meta-analyses estimate a statistical effect size for a test or an analysis by combining results from multiple studies without necessarily having access to each individual study's raw data. Multi-site meta-analysis is crucial for imaging genetics, as single sites rarely have a sample size large enough to pick up effects of single genetic variants associated with brain measures. However, if raw data can be shared, combining data in a "mega-analysis" is thought to improve power and precision in estimating global effects. As part of an ENIGMA-DTI investigation, we use fractional anisotropy (FA) maps from 5 studies (total N=2,203 subjects, aged 9-85) to estimate heritability. We combine the studies through meta- and mega-analyses as well as a mixture of the two - combining some cohorts with mega-analysis and meta-analyzing the results with those of the remaining sites. A combination of mega- and meta-approaches may boost power compared to meta-analysis alone.
Abstract:
Combining datasets across independent studies can boost statistical power by increasing the number of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies, where a large number of observations is required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for joint analysis of rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages 9-85) collected with various imaging protocols. We used the imaging genetics analysis tool SOLAR-Eclipse to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large "mega-family". We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical approaches (sample-size-weighted and standard-error-weighted) and a mega-genetic analysis to calculate heritability estimates across populations. We performed a leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time, to understand the estimate variability. Overall, meta- and mega-genetic analyses of heritability produced robust estimates of heritability.
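The two meta-analytic weightings named above can be sketched as follows; the per-cohort heritability estimates, standard errors and sample sizes are invented for illustration, and the consortium's actual analyses were run with SOLAR-Eclipse rather than code like this.

```python
def pooled_h2(estimates):
    """Pool per-cohort heritability estimates two ways, mirroring the
    sample-size- and standard-error-weighted meta-analytic approaches.
    estimates: list of (h2, se, n) tuples, one per cohort."""
    # Sample-size weighting: larger cohorts count proportionally more
    n_total = sum(n for _, _, n in estimates)
    h2_n = sum(h2 * n for h2, _, n in estimates) / n_total
    # Standard-error (inverse-variance) weighting: precise cohorts count more
    w = [1.0 / se**2 for _, se, _ in estimates]
    h2_iv = sum(h2 * wi for (h2, _, _), wi in zip(estimates, w)) / sum(w)
    return h2_n, h2_iv

# Hypothetical cohorts: (heritability, standard error, sample size)
cohorts = [(0.55, 0.08, 500), (0.48, 0.10, 300), (0.62, 0.06, 700)]
h2_n, h2_iv = pooled_h2(cohorts)
```

A leave-one-out check, as in the study, would simply re-run `pooled_h2` on each subset with one cohort removed and compare the resulting estimates.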
Abstract:
3D registration of brain MRI data is vital for many medical imaging applications. However, purely intensity-based approaches for inter-subject matching of brain structure are generally inaccurate in cortical regions, due to the highly complex network of sulci and gyri, which vary widely across subjects. Here we combine a surface-based cortical registration with a 3D fluid one for the first time, enabling precise matching of cortical folds while allowing large deformations in the enclosed brain volume that are guaranteed to be diffeomorphic. This greatly improves the matching of anatomy in cortical areas. The cortices are segmented and registered with the software Freesurfer. The deformation field is initially extended to the full 3D brain volume using a 3D harmonic mapping that preserves the matching between cortical surfaces. Finally, these deformation fields are used to initialize a 3D Riemannian fluid registration algorithm that improves the alignment of subcortical brain regions. We validate this method on an MRI dataset from 92 healthy adult twins. Results are compared to those based on volumetric registration without surface constraints; the resulting mean templates resolve consistent anatomical features both subcortically and at the cortex, suggesting that the approach is well-suited for cross-subject integration of functional and anatomical data.
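The harmonic-mapping step, extending a deformation known on the cortical surface into the enclosed volume, can be illustrated with a toy 2D Laplace solve: values fixed on a boundary are propagated inward by Jacobi iteration. This is a didactic sketch under simplifying assumptions (2D scalar field, regular grid), not the authors' 3D implementation.

```python
import numpy as np

def harmonic_extend(boundary_mask, boundary_vals, iters=500):
    """Extend values prescribed on a boundary into the interior by
    Jacobi-iterating Laplace's equation (a discrete harmonic map):
    each interior cell is repeatedly set to the mean of its 4 neighbours."""
    u = np.where(boundary_mask, boundary_vals, 0.0)
    for _ in range(iters):
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
              + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(boundary_mask, boundary_vals, nb)  # re-pin the boundary
    return u

# Toy example: fix a linear ramp on the border of a 10x10 grid
n = 10
mask = np.zeros((n, n), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
vals = np.tile(np.linspace(0.0, 1.0, n), (n, 1))
u = harmonic_extend(mask, vals)
```

By the maximum principle, the interior values stay within the range of the boundary data; here the harmonic fill reproduces the linear ramp, just as the volumetric extension stays consistent with the cortical surface match.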
Abstract:
Process variability in pollutant build-up and wash-off generates inherent uncertainty that affects the outcomes of stormwater quality models. Poor characterisation of process variability constrains the accurate accounting of the uncertainty associated with pollutant processes. This acts as a significant limitation to effective decision making in relation to stormwater pollution mitigation. The study developed three theoretical scenarios based on research findings that variations in particle size fractions <150µm and >150µm during pollutant build-up and wash-off primarily determine the variability associated with these processes. These scenarios, which combine pollutant build-up and wash-off processes taking place on a continuous timeline, are able to explain process variability under different field conditions. Given the variability characteristics of a specific build-up or wash-off event, the theoretical scenarios help to infer the variability characteristics of the associated pollutant process that follows. Mathematical formulation of the theoretical scenarios enables the incorporation of variability characteristics of pollutant build-up and wash-off processes in stormwater quality models. The research study outcomes will contribute to the quantitative assessment of uncertainty as an integral part of the interpretation of stormwater quality modelling outcomes.
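The abstract does not give the study's own equations, but build-up and wash-off are commonly modelled with exponential (Sartor-Boyd-style) formulations; the sketch below uses those standard forms with hypothetical parameter values, purely to show how the two processes chain on a continuous timeline.

```python
import math

def build_up(t_days, b_max, k_b):
    """Exponential pollutant build-up over antecedent dry days:
    B(t) = B_max * (1 - exp(-k_b * t))."""
    return b_max * (1.0 - math.exp(-k_b * t_days))

def wash_off(b0, intensity, duration, k_w):
    """Exponential wash-off of the available load b0 during a storm of
    given rainfall intensity (mm/h) and duration (h):
    W = b0 * (1 - exp(-k_w * I * t))."""
    return b0 * (1.0 - math.exp(-k_w * intensity * duration))

# Hypothetical parameters for illustration only
load = build_up(t_days=7, b_max=100.0, k_b=0.4)          # accumulated load
washed = wash_off(load, intensity=20.0, duration=1.0, k_w=0.05)
```

The residual `load - washed` then becomes the starting load for the next build-up period, which is the continuous-timeline chaining the scenarios exploit; the study's contribution is characterising the variability of the parameters across particle size fractions, not these base equations.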
Abstract:
This article uses topological approaches to suggest that education is becoming-topological. Analyses presented in a recent double issue of Theory, Culture & Society are used to demonstrate the utility of topology for education. In particular, the article explains education's topological character by examining the global convergence of education policy, testing and the discursive ranking of systems, schools and individuals, in the promise of reforming education through the proliferation of regimes of testing at local and global levels that constitute a new form of governance through data. In this conceptualisation of global education policy, changes in the form and nature of testing combine with the emergence of global policy networks to change the nature of the local (national, regional, school and classroom) forces that operate through the ‘system’. While these forces change, they work through a discursivity that produces disciplinary effects, but in a different way. This new–old disciplinarity, or ‘database effect’, is here represented through a topological approach because of its utility for conceiving education in an increasingly networked world.
Abstract:
In this paper we illustrate a set of features of the Apromore process model repository for analyzing business process variants. Two types of analysis are provided: one is static, based on differences in the process control flow; the other is dynamic, based on differences in process behavior between the variants. These features combine techniques for the management of large process model collections with those for mining process knowledge from process execution logs. The tool demonstration will be useful for researchers and practitioners working on large process model collections and process execution logs, and specifically for those with an interest in understanding, managing and consolidating business process variants both within and across organizational boundaries.
Abstract:
Flexible multilayer electrodes that combine high transparency, high conductivity, and efficient charge extraction have been deposited, characterised and used as the anode in organic solar cells. The anode consists of an AZO/Ag/AZO stack plus a very thin oxide interlayer whose ionization potential is fine-tuned by manipulating its gap state density to optimise charge transfer with the bulk heterojunction active layer consisting of poly(3-hexylthiophene-2,5-diyl) and phenyl-C61-butyric acid methyl ester (P3HT:PC61BM). The deposition method for the stack was compatible with the low temperatures required for polymer substrates. Optimisation of the electrode stack was achieved by modelling the optical and electrical properties of the device, yielding a power conversion efficiency of 2.9% under AM1.5 illumination, compared to 3.0% with an ITO-only anode and 3.5% for an ITO:PEDOT electrode. Dark I-V reverse bias characteristics indicate very low densities of occupied buffer states close to the HOMO level of the hole conductor, despite the observed ionization potential being sufficiently high. Their elimination should raise the efficiency to that achieved with ITO:PEDOT.
Abstract:
Even though crashes between trains and road users are rare events at railway level crossings, they are one of the major safety concerns for the Australian railway industry. Near-miss events at level crossings occur more frequently, and can provide more information about factors leading to level crossing incidents. In this paper we introduce a video analytic approach for automatically detecting and localizing vehicles from cameras mounted on trains, in order to detect near-miss events. To detect and localize vehicles at level crossings, we extract patches from an image and classify each patch for detecting vehicles. We developed a region proposals algorithm for generating patches, and we use a Convolutional Neural Network (CNN) for classifying each patch. To localize vehicles in images, we combine the patches that are classified as vehicles according to their CNN scores and positions. We compared our system with the Deformable Part Models (DPM) and Regions with CNN features (R-CNN) object detectors. Experimental results on a railway dataset show that the recall rate of our proposed system is 29% higher than what can be achieved with DPM or R-CNN detectors.
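The abstract does not specify the exact merging rule beyond "CNN scores and positions"; a common way to combine overlapping vehicle patches into detections is greedy non-maximum suppression, sketched here with hypothetical boxes, scores and thresholds.

```python
def merge_detections(boxes, scores, iou_thresh=0.3, score_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring patch,
    drop overlapping lower-scored ones. Boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if scores[i] < score_thresh:
            continue  # patch not confidently classified as a vehicle
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]

# Two overlapping vehicle patches and one distant patch (hypothetical)
boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (200, 50, 250, 100)]
scores = [0.9, 0.8, 0.7]
dets = merge_detections(boxes, scores)
```

The two heavily overlapping patches collapse into a single detection while the distant patch survives, which is the behaviour a position-and-score merging step needs.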
Abstract:
Our aim is to examine evidence-based strategies to motivate appropriate action and increase informed decision-making during the response and recovery phases of disasters. We combine expertise in communication, consumer psychology and marketing, disaster and emergency management, and law. This poster presents findings from a social media work package, and preliminary findings from the focus group work package on emergency warning message comprehension.
Abstract:
Systematic reviews and meta-analyses are used to combine results across studies to determine an overall effect. Meta-analysis is especially useful for combining evidence to inform social policy, but meta-analyses of applied social science research may encounter practical issues arising from the nature of the research domain. The current paper identifies potential resolutions to four issues that may be encountered in systematic reviews and meta-analyses in social research. The four issues are: scoping and targeting research questions appropriate for meta-analysis; selecting eligibility criteria where primary studies vary in research design and choice of outcome measures; dealing with inconsistent reporting in primary studies; and identifying sources of heterogeneity with multiple confounded moderators. The paper presents an overview of each issue with a review of potential resolutions, identified from similar issues encountered in meta-analysis in medical and biological sciences. The discussion aims to share and improve methodology in systematic reviews and meta-analysis by promoting cross-disciplinary communication, that is, to encourage 'viewing through different lenses'.
Abstract:
In this article, we report the crystal structures of five halogen bonded co-crystals comprising quaternary ammonium cations, halide anions (Cl– and Br–), and one of either 1,2-, 1,3-, or 1,4-diiodotetrafluorobenzene (DITFB). Three of the co-crystals are chemical isomers: 1,4-DITFB[TEA-CH2Cl]Cl, 1,2-DITFB[TEA-CH2Cl]Cl, and 1,3-DITFB[TEA-CH2Cl]Cl (where TEA-CH2Cl is chloromethyltriethylammonium ion). In each structure, the chloride anions link DITFB molecules through halogen bonds to produce 1D chains propagating with (a) linear topology in the structure containing 1,4-DITFB, (b) zigzag topology with 60° angle of propagation in that containing 1,2-DITFB, and (c) 120° angle of propagation with 1,3-DITFB. While the individual chains have highly distinctive and different topologies, they combine through π-stacking of the DITFB molecules to produce remarkably similar overall arrangements of molecules. Structures of 1,4-DITFB[TEA-CH2Br]Br and 1,3-DITFB[TEA-CH2Br]Br are also reported and are isomorphous with their chloro/chloride analogues, further illustrating the robustness of the overall supramolecular architecture. The usual approach to crystal engineering is to make structural changes to molecular components to effect specific changes to the resulting crystal structure. The results reported herein encourage pursuit of a somewhat different approach to crystal engineering. That is, to investigate the possibilities for engineering the same overall arrangement of molecules in crystals while employing molecular components that aggregate with entirely different supramolecular connectivity.