Abstract:
We report photometric observations of comet C/2012 S1 (ISON) obtained from immediately after discovery (r = 6.28 AU) until it moved into solar conjunction in mid-2013 June, using the UH 2.2 m and Gemini North 8 m telescopes on Mauna Kea, the Lowell 1.8 m in Flagstaff, the Calar Alto 1.2 m telescope in Spain, the VYSOS-5 telescopes on Mauna Loa, Hawaii, and data from the CARA network. Additional pre-discovery data from the Pan-STARRS1 survey extend the light curve back to 2011 September 30 (r = 9.4 AU). The images showed a similar tail morphology, due to small, micron-sized particles, throughout 2013. Observations at submillimeter wavelengths using the James Clerk Maxwell Telescope on 15 nights between 2013 March 9 (r = 4.52 AU) and June 16 (r = 3.35 AU) were used to search for CO and HCN rotation lines. No gas was detected, with upper limits for CO ranging between 3.5 and 4.5 × 10^27 molecules s^-1. Combined with published water production rate estimates, we have generated ice sublimation models consistent with the photometric light curve. The inbound light curve is likely controlled by sublimation of CO2; at these distances water is not a strong contributor to the outgassing. We also infer that there was a long, slow outburst of activity beginning in late 2011 and peaking in mid-2013 January (r ~ 5 AU), after which the activity decreased again through 2013 June. We suggest that this outburst was driven by CO injecting large water-ice grains into the coma. Observations as the comet came out of solar conjunction appear to confirm our models.
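The abstract does not spell out the sublimation models; the standard approach balances absorbed sunlight against thermal re-radiation and the latent heat removed by sublimating ice. A minimal sketch of that energy balance, with assumed (conventional) notation rather than the paper's own:

    % Surface energy balance for a sublimating cometary nucleus
    % (standard form; symbols assumed, not taken from the paper):
    \frac{F_\odot (1 - A)}{r_h^2} = \epsilon \sigma T^4 + L(T)\, Z(T)
    % F_\odot: solar flux at 1 AU;  A: Bond albedo;  r_h: heliocentric
    % distance (AU);  \epsilon: emissivity;  \sigma: Stefan-Boltzmann
    % constant;  L(T): latent heat of sublimation;  Z(T): sublimation
    % rate per unit area. Solving for T and Z(T) at each r_h gives the
    % CO2- or H2O-driven production rates compared against the light curve.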
Abstract:
Structured parallel programming is recognised as a viable and effective means of tackling parallel programming problems. Recently, a set of simple and powerful parallel building blocks (RISC-pb2l) has been proposed to support the modelling and implementation of parallel frameworks. In this work we demonstrate how that same set of parallel building blocks may be used to model both general-purpose parallel programming abstractions not usually listed in classical skeleton sets, and more specialised domain-specific parallel patterns. We show how an implementation of RISC-pb2l can be realised via the FastFlow framework and present experimental evidence of the feasibility and efficiency of the approach.
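FastFlow composes computations from simple, reusable nodes in much the way RISC-pb2l prescribes. As a rough illustration of the style only (a minimal three-stage pipeline using FastFlow's public C++ API, not the RISC-pb2l layer described in the paper):

    // Minimal FastFlow pipeline sketch: generator -> square -> printer.
    // Assumes FastFlow headers are available; illustrates the building-
    // block style only, not the actual RISC-pb2l implementation.
    #include <ff/ff.hpp>
    #include <ff/pipeline.hpp>
    #include <iostream>

    using namespace ff;

    struct Source : ff_node_t<long> {
        long *svc(long *) override {
            for (long i = 1; i <= 10; ++i)
                ff_send_out(new long(i));   // emit a stream of tasks
            return EOS;                     // end of stream
        }
    };

    struct Square : ff_node_t<long> {
        long *svc(long *x) override {
            *x = (*x) * (*x);               // per-item computation
            return x;
        }
    };

    struct Sink : ff_node_t<long> {
        long *svc(long *x) override {
            std::cout << *x << '\n';
            delete x;
            return GO_ON;                   // consume, produce nothing
        }
    };

    int main() {
        Source s; Square q; Sink k;
        ff_Pipe<> pipe(s, q, k);            // compose blocks into a pipeline
        return pipe.run_and_wait_end() < 0 ? 1 : 0;
    }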
Abstract:
This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict the parallel performance of the refactored program with reasonable accuracy, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21x on a 24-core machine over the sequential code.
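The paper's cost models are not reproduced in the abstract; skeleton libraries such as Skel typically use closed-form models of this general shape for a task farm (the symbols below are assumed, not the paper's):

    % Assumed task-farm cost model: throughput is limited by the slowest
    % of the emitter, the N_w replicated workers, and the collector.
    T_{farm}(N_w) \approx \max\!\left( T_e, \; \frac{T_w}{N_w}, \; T_c \right),
    \qquad
    \mathrm{speedup}(N_w) \approx \frac{T_{seq}}{T_{farm}(N_w)}
    % T_e, T_c: per-task emitter/collector overhead;  T_w: per-task
    % worker time;  N_w: number of workers;  T_{seq}: sequential
    % per-task time. Evaluating such a model before refactoring predicts
    % whether a farm (and with how many workers) is worth introducing.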
Abstract:
Obesity has been linked with elevated levels of C-reactive protein (CRP), and both have been associated with increased risk of mortality and cardiovascular disease (CVD). Previous studies have used a single ‘baseline’ measurement, and such analyses cannot account for possible changes in these measures over time, which may bias the estimation of risk. Using four cohorts from CHANCES with repeated measures in participants aged 50 years and older, multivariate time-dependent Cox proportional hazards regression was used to estimate hazard ratios (HR) and 95 % confidence intervals (CI) for the relationships of body mass index (BMI) and CRP with all-cause mortality and CVD. Being overweight (≥25–<30 kg/m2) or moderately obese (≥30–<35 kg/m2) tended to be associated with a lower risk of mortality compared with normal weight (≥18.5–<25 kg/m2): ESTHER, HR (95 % CI) 0.69 (0.58–0.82) and 0.78 (0.63–0.97); Rotterdam, 0.86 (0.79–0.94) and 0.80 (0.72–0.89). A similar relationship was found, but only for overweight in Glostrup, HR (95 % CI) 0.88 (0.76–1.02), and for moderate obesity in Tromsø, HR (95 % CI) 0.79 (0.62–1.01). No associations were evident between repeated measures of BMI and CVD. Conversely, increasing CRP concentrations, measured on more than one occasion, were associated with an increasing risk of mortality and CVD. Being overweight or moderately obese is thus associated with a lower risk of mortality, while CRP, independent of BMI, is positively associated with mortality and CVD risk. If inflammation links CRP and BMI, they may nevertheless act through distinct, independent pathways. Accounting for independent changes in risk factors over time may be crucial for unveiling their effects on mortality and disease morbidity.
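The time-dependent Cox model named above has the standard form in which covariates are updated over follow-up; a sketch with assumed notation (the cohort-specific adjustment sets will differ):

    % Time-dependent Cox proportional hazards model (standard form):
    h(t \mid X(t)) = h_0(t)\,
        \exp\!\big( \beta_1\,\mathrm{BMI}(t) + \beta_2\,\mathrm{CRP}(t)
                    + \gamma^{\top} Z(t) \big)
    % h_0(t): unspecified baseline hazard;  BMI(t), CRP(t): most recent
    % repeated measurement at time t;  Z(t): adjustment covariates;
    % HR for a one-unit covariate change = exp(beta). Allowing X(t) to be
    % updated at each examination is what removes the single-baseline
    % bias discussed above.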
Abstract:
Many parts of the UK’s rail network were constructed in the mid-19th century, long before the advent of modern construction standards. Historically low levels of investment, poor maintenance strategies and the deleterious effects of climate change have resulted in critical elements of the rail network being at significant risk of failure. The majority of failures that have occurred over recent years have been triggered by extreme weather events. Advance assessment and remediation of earthworks is, however, significantly less costly than dealing with failures reactively. It is therefore crucial that appropriate approaches for assessing the stability of earthworks are developed, so that repair work can be better targeted and failures avoided wherever possible. This extended abstract briefly discusses preliminary results from an ongoing geophysical research project studying the impact of climatic and seasonal weather variations on the stability of a century-old railway embankment on the Gloucestershire Warwickshire Steam Railway line in southern England.
Abstract:
Prevalence estimates for Autism Spectrum Disorder have been increasing over the past few years, with rates now reported at 1 in 68. Interventions based on Applied Behaviour Analysis are significantly related to the best outcomes and are widely considered ‘treatment as usual’ in North America. In Europe this is not the case; instead, a rather ill-defined ‘eclectic’ approach is widely promoted. In this paper we discuss some of the roots of this gulf between Europe and North America and correct some of the misconceptions about Applied Behaviour Analysis that prevail in Europe.
Abstract:
A natural subgroup (which we refer to as Saccharomyces uvarum) was identified within the heterogeneous species Saccharomyces bayanus. The typical electrophoretic karyotype, the interfertility of hybrids between strains, a distinctive sugar fermentation pattern and uniform fermentation characteristics in must indicated that this subgroup was not only highly homogeneous but also clearly distinguishable from other species within the Saccharomyces sensu stricto group. Investigation of the S. bayanus type strain and other strains that have been classified as S. bayanus confirmed the apparent lack of homogeneity of the species and, in some cases, supported the hypothesis that they are natural hybrids.
Abstract:
Using genome-wide data from 253,288 individuals, we identified 697 variants at genome-wide significance that together explained one-fifth of the heritability for adult height. By testing different numbers of variants in independent studies, we show that the most strongly associated ~2,000, ~3,700 and ~9,500 SNPs explained ~21%, ~24% and ~29% of phenotypic variance. Furthermore, all common variants together captured 60% of heritability. The 697 variants clustered in 423 loci were enriched for genes, pathways and tissue types known to be involved in growth and together implicated genes and pathways not highlighted in earlier efforts, such as signaling by fibroblast growth factors, WNT/β-catenin and chondroitin sulfate-related genes. We identified several genes and pathways not previously connected with human skeletal growth, including mTOR, osteoglycin and binding of hyaluronic acid. Our results indicate a genetic architecture for human height that is characterized by a very large but finite number (thousands) of causal variants.
Abstract:
Critical phenomena involve structural changes in the correlations among a system's constituents. Such changes can be reproduced and characterized in quantum simulators able to tackle medium-to-large-size systems. We demonstrate these concepts by engineering the ground state of a three-spin Ising ring using a pair of entangled photons. The effect of a simulated magnetic field, leading to a critical modification of the correlations within the ring, is analysed by studying two- and three-spin entanglement. In particular, we connect the violation of a multipartite Bell inequality with the amount of tripartite entanglement in our ring.
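The abstract does not give the Hamiltonian; the model simulated is presumably the transverse-field Ising ring in its standard form (conventions assumed here and possibly different from the paper's):

    % Three-spin Ising ring in a simulated transverse field
    % (standard form; notation assumed):
    H = J \sum_{i=1}^{3} \sigma^{z}_{i}\,\sigma^{z}_{i+1}
        + B \sum_{i=1}^{3} \sigma^{x}_{i},
    \qquad \sigma_{4} \equiv \sigma_{1} \quad \text{(ring closure)}
    % J: spin-spin coupling;  B: simulated magnetic field. Sweeping B/J
    % through the critical region is what restructures the two- and
    % three-spin correlations probed in the experiment.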
Abstract:
Biodiversity continues to decline in the face of increasing anthropogenic pressures such as habitat destruction, exploitation, pollution and the introduction of alien species. Existing global databases of species' threat status or population time series are dominated by charismatic species. The collation of datasets with broad taxonomic and biogeographic extents, which support computation of a range of biodiversity indicators, is necessary to enable better understanding of historical declines and to project - and avert - future declines. We describe and assess a new database of more than 1.6 million samples from 78 countries representing over 28,000 species, collated from existing spatial comparisons of local-scale biodiversity exposed to different intensities and types of anthropogenic pressures at terrestrial sites around the world. The database contains measurements taken in 208 (of 814) ecoregions, 13 (of 14) biomes, 25 (of 35) biodiversity hotspots and 16 (of 17) megadiverse countries. It contains more than 1% of all described species, and more than 1% of the described species within many taxonomic groups - including flowering plants, gymnosperms, birds, mammals, reptiles, amphibians, beetles, lepidopterans and hymenopterans. The dataset, which is still growing, is therefore already considerably larger and more representative than those used by previous quantitative models of biodiversity trends and responses. The database is being assembled as part of the PREDICTS project (Projecting Responses of Ecological Diversity In Changing Terrestrial Systems - http://www.predicts.org.uk). We make site-level summary data available alongside this article. The full database will be publicly available in 2015.
Abstract:
We introduce a new parallel pattern derived from a specific application domain and show how it turns out to have applications beyond its domain of origin. The pool evolution pattern models the parallel evolution of a population subject to mutations, evolving in such a way that a given fitness function is optimized. The pattern has been demonstrated to be suitable for capturing and modeling the parallel patterns underpinning various evolutionary algorithms, as well as other parallel patterns typical of symbolic computation. In this paper we introduce the pattern, discuss its implementation on modern multi-/many-core architectures and present experimental results, obtained with FastFlow and Erlang implementations, assessing its feasibility and scalability.
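The pool evolution pattern is described only informally above. A minimal sketch of its structure in C++17 (parallel variation and fitness evaluation over the pool, followed by selection and survival, iterated until termination) is shown below; the fitness and mutation functions are toy placeholders, not the FastFlow or Erlang implementation from the paper:

    // Minimal pool-evolution sketch: iterate parallel evaluation ->
    // selection/survival -> mutation until a termination condition holds.
    #include <algorithm>
    #include <execution>
    #include <random>
    #include <vector>

    struct Individual { double genome; double fitness; };

    double evaluate(double g) { return -(g - 3.0) * (g - 3.0); }  // toy fitness

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> noise(0.0, 0.1);
        std::vector<Individual> pool(1024);
        for (auto &ind : pool)
            ind.genome = std::uniform_real_distribution<>(-10, 10)(rng);

        for (int gen = 0; gen < 100; ++gen) {
            // Evaluation is independent per individual, so it can run in
            // parallel (here via C++17 parallel algorithms).
            std::for_each(std::execution::par, pool.begin(), pool.end(),
                          [](Individual &ind) {
                              ind.fitness = evaluate(ind.genome);
                          });
            // Selection/survival: keep the fitter half, refill the rest
            // by mutating the survivors (sequential to keep rng use safe).
            std::sort(pool.begin(), pool.end(),
                      [](const Individual &a, const Individual &b) {
                          return a.fitness > b.fitness;
                      });
            for (size_t i = pool.size() / 2; i < pool.size(); ++i)
                pool[i].genome = pool[i - pool.size() / 2].genome + noise(rng);
        }
    }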
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who the elected leader is. This article studies the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds had not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was additionally required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm; an O(D)-time leader election algorithm is already known. A slight adaptation of our lower bound technique also yields an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs; the answer is known to be negative in the deterministic setting. We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases, which already separates (for those cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms whose bounds trade off messages against time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
Abstract:
The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multiparametric design-space exploration for optimization, followed by design verification. Designers of special-purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error-correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options, ranging from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting diverse target architectures typically entails developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length codes (e.g., 8,000 bits) to long DVB-S2 codes (e.g., 64,800 bits). We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
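The retargeting idea rests on OpenCL's run-time compilation model: one kernel source can be built for whichever platform's driver is present. A minimal host-side sketch in C++ (illustrative saxpy kernel and device loop; this is not SOpenCL or the paper's LDPC decoder):

    // Sketch of "single kernel, many platforms": the same OpenCL C source
    // is compiled at run time for each available device (CPU, GPU, or an
    // FPGA toolchain's OpenCL driver). Kernel and names are illustrative.
    #define CL_HPP_TARGET_OPENCL_VERSION 120
    #define CL_HPP_MINIMUM_OPENCL_VERSION 120
    #include <CL/opencl.hpp>
    #include <iostream>
    #include <vector>

    static const char *kSource = R"CLC(
    __kernel void saxpy(__global const float *x, __global float *y, float a) {
        size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }
    )CLC";

    int main() {
        std::vector<cl::Platform> platforms;
        cl::Platform::get(&platforms);                 // enumerate platforms
        for (auto &p : platforms) {
            std::vector<cl::Device> devices;
            p.getDevices(CL_DEVICE_TYPE_ALL, &devices);
            for (auto &d : devices) {
                cl::Context ctx(d);
                cl::CommandQueue q(ctx, d);
                cl::Program prog(ctx, kSource);        // same source everywhere
                if (prog.build({d}) != CL_SUCCESS) continue;  // skip on failure

                const size_t n = 1024;
                std::vector<float> x(n, 1.0f), y(n, 2.0f);
                cl::Buffer bx(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                              n * sizeof(float), x.data());
                cl::Buffer by(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                              n * sizeof(float), y.data());

                cl::Kernel k(prog, "saxpy");
                k.setArg(0, bx); k.setArg(1, by); k.setArg(2, 3.0f);
                q.enqueueNDRangeKernel(k, cl::NullRange, cl::NDRange(n));
                q.enqueueReadBuffer(by, CL_TRUE, 0, n * sizeof(float), y.data());
                std::cout << d.getInfo<CL_DEVICE_NAME>()
                          << ": y[0] = " << y[0] << '\n';   // expect 5
            }
        }
    }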