86 results for Meyer, Marvin
Abstract:
Genome-wide association studies (GWAS) have identified multiple common genetic variants associated with an increased risk of prostate cancer (PrCa), but these explain less than one-third of the heritability. To identify further susceptibility alleles, we conducted a meta-analysis of four GWAS including 5953 cases of aggressive PrCa and 11 463 controls (men without PrCa). We computed association tests for approximately 2.6 million SNPs and followed up the most significant SNPs by genotyping 49 121 samples in 29 studies through the international PRACTICAL and BPC3 consortia. We not only confirmed the association of a PrCa susceptibility locus, rs11672691 on chromosome 19, but also showed an association with aggressive PrCa [odds ratio = 1.12 (95% confidence interval 1.03-1.21), P = 1.4 × 10⁻⁸]. This report describes a genetic variant associated with aggressive PrCa, a form of the disease with a poorer prognosis.
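The reported odds ratio and 95% confidence interval follow the standard construction on the log-odds scale: log(OR) is approximately normal with standard error sqrt(1/a + 1/b + 1/c + 1/d), so the interval is exp(log OR ± 1.96·SE). A minimal sketch with illustrative counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```

An interval whose lower bound exceeds 1 indicates a statistically significant positive association, as in the abstract's reported CI of 1.03-1.21.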
Abstract:
Background Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and population health. The Global Burden of Disease (GBD) studies done in 1990 and 2000 have been the only studies to quantify non-fatal health outcomes across an exhaustive set of disorders at the global and regional level. Neither effort quantified uncertainty in prevalence or years lived with disability (YLDs). Methods Of the 291 diseases and injuries in the GBD cause list, 289 cause disability. For 1160 sequelae of the 289 diseases and injuries, we undertook a systematic analysis of prevalence, incidence, remission, duration, and excess mortality. Sources included published studies, case notification, population-based cancer registries, other disease registries, antenatal clinic serosurveillance, hospital discharge data, ambulatory care data, household surveys, other surveys, and cohort studies. For most sequelae, we used a Bayesian meta-regression method, DisMod-MR, designed to address key limitations in descriptive epidemiological data, including missing data, inconsistency, and large methodological variation between data sources. For some disorders, we used natural history models, geospatial models, back-calculation models (models calculating incidence from population mortality rates and case fatality), or registration completeness models (models adjusting for incomplete registration with health-system access and other covariates). Disability weights for 220 unique health states were used to capture the severity of health loss. YLDs by cause at age, sex, country, and year levels were adjusted for comorbidity with simulation methods. We included uncertainty estimates at all stages of the analysis. Findings Global prevalence for all ages combined in 2010 across the 1160 sequelae ranged from fewer than one case per 1 million people to 350 000 cases per 1 million people. 
Prevalence and severity of health loss were weakly correlated (correlation coefficient −0·37). In 2010, there were 777 million YLDs from all causes, up from 583 million in 1990. The main contributors to global YLDs were mental and behavioural disorders, musculoskeletal disorders, and diabetes or endocrine diseases. The leading specific causes of YLDs were much the same in 2010 as they were in 1990: low back pain, major depressive disorder, iron-deficiency anaemia, neck pain, chronic obstructive pulmonary disease, anxiety disorders, migraine, diabetes, and falls. Age-specific prevalence of YLDs increased with age in all regions and has decreased slightly from 1990 to 2010. Regional patterns of the leading causes of YLDs were more similar compared with years of life lost due to premature mortality. Neglected tropical diseases, HIV/AIDS, tuberculosis, malaria, and anaemia were important causes of YLDs in sub-Saharan Africa. Interpretation Rates of YLDs per 100 000 people have remained largely constant over time but rise steadily with age. Population growth and ageing have increased YLD numbers and crude rates over the past two decades. Prevalences of the most common causes of YLDs, such as mental and behavioural disorders and musculoskeletal disorders, have not decreased. Health systems will need to address the needs of the rising numbers of individuals with a range of disorders that largely cause disability but not mortality. Quantification of the burden of non-fatal health outcomes will be crucial to understand how well health systems are responding to these challenges. Effective and affordable strategies to deal with this rising burden are an urgent priority for health systems in most parts of the world. Funding Bill & Melinda Gates Foundation.
Abstract:
This study assessed the workday step counts of lower active (<10,000 daily steps) university employees using an automated, web-based walking intervention (Walk@Work). METHODS: Academic and administrative staff (n=390; 45.6±10.8 years; BMI 27.2±5.5 kg/m²; 290 women) at five campuses (two in Australia, and one each in Canada, Northern Ireland and the United States) were given a pedometer, access to the website program (2010-11) and tasked with increasing workday walking by 1000 daily steps above baseline, every two weeks, over a six-week period. Step count changes at four weeks post-intervention were evaluated relative to campus and baseline walking. RESULTS: Across the sample, step counts significantly increased from baseline to post-intervention (1477 daily steps; p=0.001). Variations in increases were evident between campuses (largest difference of 870 daily steps; p=0.04) and for baseline activity status. Those least active at baseline (<5000 daily steps; n=125) increased step counts the most (1837 daily steps; p=0.001), whereas those most active (7500-9999 daily steps; n=79) increased the least (929 daily steps; p=0.001). CONCLUSIONS: Walk@Work increased workday walking by 25% in this sample overall. Increases occurred through an automated program, at campuses in different countries, and were most evident for those most in need of intervention.
Abstract:
This paper presents a new approach for the inclusion of human expert cognition into autonomous trajectory planning for unmanned aerial systems (UASs) operating in low-altitude environments. During typical UAS operations, multiple objectives may exist; therefore, the use of multicriteria decision aid techniques can potentially allow for convergence to trajectory solutions which better reflect overall mission requirements. In that context, additive multiattribute value theory has been applied to optimize trajectories with respect to multiple objectives. A graphical user interface was developed to allow for knowledge capture from a human decision maker (HDM) through simulated decision scenarios. The expert decision data gathered are converted into value functions and corresponding criteria weightings using utility additive theory. The inclusion of preferences elicited from HDM data within an automated decision system allows for the generation of trajectories which more closely represent the candidate HDM decision preferences. This approach has been demonstrated in this paper through simulation using a fixed-wing UAS operating in low-altitude environments.
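The additive multiattribute value model described above scores a candidate trajectory as V(x) = Σᵢ wᵢ·vᵢ(xᵢ), with weights and single-criterion value functions elicited from the human decision maker. A minimal sketch with hypothetical criteria, weights, and value functions (none taken from the paper):

```python
def additive_value(criteria, weights, value_fns):
    """Additive multiattribute value: V(x) = sum_i w_i * v_i(x_i).
    Weights are assumed to sum to 1; each v_i maps an attribute
    level to a [0, 1] value scale."""
    return sum(w * value_fns[k](criteria[k]) for k, w in weights.items())

# Hypothetical trajectory criteria: path length (km), threat exposure
# (normalised to [0, 1]), and altitude deviation (m); shorter/lower is better.
value_fns = {
    "length":  lambda x: max(0.0, 1.0 - x / 100.0),
    "threat":  lambda x: 1.0 - x,
    "alt_dev": lambda x: max(0.0, 1.0 - x / 50.0),
}
weights = {"length": 0.5, "threat": 0.3, "alt_dev": 0.2}  # elicited weights (illustrative)

traj = {"length": 40.0, "threat": 0.2, "alt_dev": 10.0}
score = additive_value(traj, weights, value_fns)  # 0.5*0.6 + 0.3*0.8 + 0.2*0.8
```

A trajectory planner would evaluate each candidate path this way and prefer the one with the highest aggregate value.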
Abstract:
Pricing greenhouse gas emissions is a burgeoning and possibly lucrative financial means for climate change mitigation. Emissions pricing is being used to fund emissions-abatement technologies and to modify land management to improve carbon sequestration and retention. Here we discuss the principal land-management options under existing and realistic future emissions-price legislation in Australia, and examine them with respect to their anticipated direct and indirect effects on biodiversity. The main ways in which emissions price-driven changes to land management can affect biodiversity are through policies and practices for (1) environmental plantings for carbon sequestration, (2) native regrowth, (3) fire management, (4) forestry, (5) agricultural practices (including cropping and grazing), and (6) feral animal control. While most land-management options available to reduce net greenhouse gas emissions offer clear advantages to increase the viability of native biodiversity, we describe several caveats regarding potentially negative outcomes, and outline components that need to be considered if biodiversity is also to benefit from the new carbon economy. Carbon plantings will only have real biodiversity value if they comprise appropriate native tree species and provide suitable habitats and resources for valued fauna. Such plantings also risk severely altering local hydrology and reducing water availability. Management of regrowth post-agricultural abandonment requires setting appropriate baselines and allowing for thinning in certain circumstances, and improvements to forestry rotation lengths would likely increase carbon-retention capacity and biodiversity value. Prescribed burning to reduce the frequency of high-intensity wildfires in northern Australia is being used as a tool to increase carbon retention. 
Fire management in southern Australia is not readily amenable to maximising carbon storage potential, but will become increasingly important for biodiversity conservation as the climate warms. Carbon price-based modifications to agriculture that would benefit biodiversity include reductions in tillage frequency and livestock densities, reductions in fertiliser use, and retention and regeneration of native shrubs; however, anticipated shifts to exotic perennial grass species such as buffel grass and kikuyu could have net negative implications for native biodiversity. Finally, it is unlikely that major reductions in greenhouse gas emissions arising from feral animal control are possible, even though reduced densities of feral herbivores will benefit Australian biodiversity greatly.
Abstract:
The purpose of this paper is to identify goal conflicts – both actual and potential – between climate and social policies in government strategies in response to the growing significance of climate change as a socioecological issue (IPCC 2007). Both social and climate policies are political responses to long-term societal trends related to capitalist development, industrialisation, and urbanisation (Koch, 2012). Both modify these processes through regulation, fiscal transfers and other measures, thereby affecting conditions for the other. This means that there are fields of tensions and synergies between social policy and climate change policy. Exploring these tensions and synergies is an increasingly important task for navigating genuinely sustainable development. Gough et al. (2008) highlight three potential synergies between social and climate change policies: First, income redistribution – a traditional concern of social policy – can facilitate the use, and enhance the efficiency, of carbon pricing. A second area of synergy is housing, transport, urban policies and community development, which all have the potential to contribute crucially to reducing carbon emissions. Finally, climate change mitigation will require substantial and rapid shifts in producer and consumer behaviour. Land use planning policy is a critical bridge between climate change and social policy that provides a means to explore the tensions and synergies that are evolving within this context. This paper will focus on spatial planning as an opportunity to develop strategies to adapt to climate change, and reviews the challenges of such change. Land use and spatial planning involve the allocation of land and the design and control of spatial patterns. Spatial planning is identified as one of the most effective means of adapting settlements in response to climate change (Hurlimann and March, 2012). 
It provides the instrumental framework for adaptation (Meyer et al., 2010) and operates as both a mechanism to achieve adaptation and a forum to negotiate priorities surrounding adaptation (Davoudi et al., 2009). The acknowledged role of spatial planning in adaptation, however, has not translated into comparably significant consideration in the planning literature (Davoudi et al., 2009; Hurlimann and March, 2012). The discourse on adaptation specifically through spatial planning is described as ‘missing’ and ‘subordinate’ in national adaptation plans (Greiving and Fleischhauer, 2012), ‘underrepresented’ (Roggema et al., 2012) and ‘limited and disparate’ in the planning literature (Davoudi et al., 2009). Hurlimann and March (2012) suggest this may be due to limited experience of adaptation in developed nations, while Roggema et al. (2012) and Crane and Landis (2010) suggest it is because climate change is a wicked problem involving an unfamiliar problem, various frames of understanding and uncertain solutions. The potential for goal conflicts within this policy forum seems to outweigh the synergies. Yet spatial planning will be a critical policy tool in the future to both protect communities and adapt them to climate change.
Abstract:
Recent experiments [F. E. Pinkerton, M. S. Meyer, G. P. Meisner, M. P. Balogh, and J. J. Vajo, J. Phys. Chem. C 111, 12881 (2007) and J. J. Vajo and G. L. Olson, Scripta Mater. 56, 829 (2007)] demonstrated that the recycling of hydrogen in the coupled LiBH4/MgH2 system is fully reversible. The rehydrogenation of MgB2 is an important step toward this reversibility. Using ab initio density functional theory calculations, we found that the activation barriers for the dissociation of H2 are 0.49 and 0.58 eV for the B- and Mg-terminated MgB2(0001) surfaces, respectively. This implies that the dissociation kinetics of H2 on a MgB2(0001) surface should be greatly improved compared to that in pure Mg materials. Additionally, the diffusion of dissociated H atoms on the Mg-terminated MgB2(0001) surface is almost barrierless. Our results shed light on the experimentally observed reversibility and improved kinetics of the coupled LiBH4/MgH2 system.
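Assuming equal Arrhenius prefactors, the difference between the two reported barriers translates into a dissociation-rate ratio of exp(ΔEa/kBT) favouring the B-terminated surface. A quick estimate at an assumed room temperature (the abstract does not state a temperature):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def rate_ratio(ea1, ea2, temp=300.0):
    """Arrhenius rate ratio k1/k2 = exp(-(ea1 - ea2) / (kB*T)),
    assuming identical prefactors for both processes."""
    return math.exp(-(ea1 - ea2) / (K_B * temp))

# Barriers from the abstract: 0.49 eV (B-terminated), 0.58 eV (Mg-terminated)
ratio = rate_ratio(0.49, 0.58)  # ~30x faster on the B-terminated surface at 300 K
```

This is only an order-of-magnitude illustration of why a 0.09 eV barrier difference matters kinetically; real rates also depend on the prefactors.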
Abstract:
We conducted a large-scale association study to identify genes that influence nonfamilial breast cancer risk using a collection of German cases and matched controls and >25,000 single nucleotide polymorphisms located within 16,000 genes. One of the candidate loci identified was located on chromosome 19p13.2 [odds ratio (OR) = 1.5, P = 0.001]. The effect was substantially stronger in the subset of cases with reported family history of breast cancer (OR = 3.4, P = 0.001). The finding was subsequently replicated in two independent collections (combined OR = 1.4, P < 0.001) and was also associated with predisposition to prostate cancer in an independent sample set of prostate cancer cases and matched controls (OR = 1.4, P = 0.002). High-density single nucleotide polymorphism mapping showed that the extent of association spans 20 kb and includes the intercellular adhesion molecule genes ICAM1, ICAM4, and ICAM5. Although genetic variants in ICAM5 showed the strongest association with disease status, ICAM1 is expressed at highest levels in normal and tumor breast tissue. A variant in ICAM5 was also associated with disease progression and prognosis. Because ICAMs are suitable targets for antibodies and small molecules, these findings may not only provide diagnostic and prognostic markers but also new therapeutic opportunities in breast and prostate cancer.
Abstract:
Crashes on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with traffic flow data of one hour prior to the crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. Using the k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their distance. Four major trends have been found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned based on the monitored traffic flow conditions with a sliding window of 60 minutes to increase the accuracy of the results and minimize false alarms.
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We will compare them with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. Using the k-means clustering method with a Euclidean distance function allowed the crashes to be clustered. Then, normal-situation data were extracted based on the time distribution of crashes and were clustered to compare with the “high risk” clusters. Five major trends have been found in the clustering results for both high-risk and normal conditions. The study found that traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned based on the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.
Abstract:
Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills and they have an agile ability to apply this knowledge to interacting with and having an impact on the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expertlike ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expertlike ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed using NVivo9 and manually. Grounded theory allowed categories and themes to emerge from the data. 
Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and may be bounded. Themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. These were: information environment: the total information environment is perceived and understood; information structures: content, index structures, and retrieval algorithms are understood; information vocabularies: fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools. The fourth threshold concept was concept fusion, the integration of the other three threshold concepts and further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (dancing property), and profound ontological shift (identity as searcher). In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core that also integrates the traits and praxes elicited from the study, attributes which are likewise long recognized in LIS research as present in professional searchers. 
The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential basics in crash likelihood estimation studies and still require more attention and investigation. In this paper we will show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare them with normal traffic trends, and show that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash occurrence. A k-means clustering algorithm was applied to determine dominant pre-crash traffic patterns. In the first phase of this research, traffic regimes were identified by analysing crashes and normal traffic situations using half an hour of speed data at locations upstream of crashes. Then, the second phase investigated different combinations of speed risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends have been found in the first phase of this paper for both high-risk and normal conditions. The study found that traffic regimes differed in their speed trends. Moreover, the second phase shows that the spatiotemporal difference of speed is a better risk indicator than the other combinations of speed-related risk indicators considered. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimize false alarms.
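The clustering step described in these abstracts, grouping fixed-length pre-crash speed time series by Euclidean distance with k-means, can be sketched as follows; the speed profiles and k are toy values for illustration, not the study's data:

```python
import random

def kmeans(series, k, iters=50, seed=0):
    """Plain k-means with Euclidean distance over fixed-length time series."""
    rng = random.Random(seed)
    centroids = rng.sample(series, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in series:
            # Assign each series to the nearest centroid (squared distance)
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(s, centroids[i])))
            clusters[j].append(s)
        # Recompute centroids as the element-wise mean of each cluster
        centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy pre-crash speed profiles (km/h, sampled over 30 min):
# free-flow trends vs. decaying congested trends
free = [[80, 82, 81, 80, 79, 81], [78, 80, 82, 81, 80, 79]]
cong = [[40, 35, 30, 28, 25, 22], [42, 38, 33, 29, 26, 24]]
centroids, clusters = kmeans(free + cong, k=2)
```

Each resulting centroid is a representative speed trend; in the study's setting, a monitored 30-minute speed window would be compared against such centroids to flag high-risk regimes.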
Abstract:
This project develops and evaluates a model of curriculum design that aims to assist student learning of foundational disciplinary ‘Threshold Concepts’. The project uses phenomenographic action research, cross-institutional peer collaboration and the Variation Theory of Learning to develop and trial the model. Two contrasting disciplines (Physics and Law) and four institutions (two research-intensive and two universities of technology) were involved in the project, to ensure broad applicability of the model across different disciplines and contexts. The Threshold Concepts that were selected for curriculum design attention were measurement uncertainty in Physics and legal reasoning in Law. Threshold Concepts are key disciplinary concepts that are inherently troublesome, transformative and integrative in nature. Once understood, such concepts transform students’ views of the discipline because they enable students to coherently integrate what were previously seen as unrelated aspects of the subject, providing new ways of thinking about it (Meyer & Land 2003, 2005, 2006; Land et al. 2008). However, the integrative and transformative nature of such threshold concepts makes them inherently difficult for students to learn, with resulting misunderstandings of concepts being prevalent...
Abstract:
Process models specify behavioral aspects by describing ordering constraints between tasks which must be accomplished to achieve envisioned goals. Tasks usually exchange information by means of data objects, i.e., by writing information to and reading information from data objects. A data object can be characterized by its states and allowed state transitions. In this paper, we propose a notion which checks conformance of a process model with respect to data objects that its tasks access. This new notion can be used to tell whether in every execution of a process model each time a task needs to access a data object in a particular state, it is ensured that the data object is in the expected state or can reach the expected state and, hence, the process model can achieve its goals.
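The proposed conformance notion can be illustrated with a small sketch: the data object's allowed state transitions form a graph, and each task's required state must either be the current state or still reachable from it. The object, states, and tasks below are hypothetical, and each task is simplified to leave the object in its required state:

```python
# Allowed state transitions of a hypothetical "Order" data object
TRANSITIONS = {
    "created":   {"approved", "rejected"},
    "approved":  {"shipped"},
    "shipped":   {"delivered"},
    "rejected":  set(),
    "delivered": set(),
}

def reachable(src, dst, transitions):
    """Can the data object move from state src to state dst?"""
    seen, stack = set(), [src]
    while stack:
        s = stack.pop()
        if s == dst:
            return True
        if s not in seen:
            seen.add(s)
            stack.extend(transitions.get(s, ()))
    return False

def conforms(task_sequence, start, transitions):
    """Check that each task's required object state is the current state or
    reachable from it; returns (ok, first offending task or None)."""
    state = start
    for task, required in task_sequence:
        if state != required and not reachable(state, required, transitions):
            return False, task
        state = required  # simplification: the task leaves the object in this state
    return True, None

ok, _ = conforms([("approve", "approved"), ("ship", "shipped")],
                 "created", TRANSITIONS)
```

Reordering the two tasks makes the sequence non-conformant: once the object is "shipped", the state "approved" is no longer reachable, which is exactly the kind of violation the proposed notion detects.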