341 results for Compile


Relevance:

10.00%

Abstract:

Parallelizing compilers have difficulty analysing and optimising complex code. To address this, some analysis may be delayed until run-time, and techniques such as speculative execution may be used. Furthermore, to enhance performance, a feedback loop may be set up between the compile-time and run-time analysis systems, as in iterative compilation. To extend this, it is proposed that the run-time analysis collect information about the values of variables not already determined, and estimate a probability measure for the sampled values. These measures may then be used to guide optimisations in further analyses of the program. To address the problem of variables whose values are measures, this paper also presents an outline of a novel combination of previous probabilistic denotational semantics models, applied to a simple imperative language.
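As an illustration of the run-time side of this proposal, the sketch below samples a variable's values during instrumented runs and builds an empirical probability measure that a later compilation pass could consult. The class, threshold and sampled values are hypothetical, not taken from the paper.

```python
from collections import Counter

class ValueProfile:
    """Empirical probability measure for a variable sampled at run time."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def sample(self, value):
        # Record one observed value of the instrumented variable.
        self.counts[value] += 1
        self.total += 1

    def probability(self, value):
        # Estimated probability that the variable takes this value.
        return self.counts[value] / self.total if self.total else 0.0

# Feedback loop (hypothetical): if one value dominates, a further
# analysis pass may specialise the code for it and guard the
# specialised path with a run-time check (speculative execution).
profile = ValueProfile()
for observed in [8, 8, 8, 16, 8, 8]:   # values seen during profiled runs
    profile.sample(observed)

likely, _ = profile.counts.most_common(1)[0]
if profile.probability(likely) > 0.8:
    print(f"specialise for value {likely}; guard against the rest")
else:
    print("no dominant value; keep the generic code")
```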

Relevance:

10.00%

Abstract:

We present the results of a 10.5-yr, volume-limited (28-Mpc) search for supernova (SN) progenitor stars. In doing so we compile all SNe discovered within this volume (132, of which 27 per cent are Type Ia) and determine the relative rates of each subtype from literature studies. The core-collapse SNe break down into 59 per cent II-P and 29 per cent Ib/c, with the remainder being IIb (5 per cent), IIn (4 per cent) and II-L (3 per cent). There have been 20 II-P SNe with high-quality optical or near-infrared pre-explosion images that allow a meaningful search for the progenitor stars. In five cases they are clearly red supergiants, one case is unconstrained, two fall on compact coeval star clusters and the other twelve have no progenitor detected. We review and update all the available data for the host galaxies and SN environments (distance, metallicity and extinction) and determine masses and upper mass estimates for these 20 progenitor stars using the STARS stellar evolutionary code and a single consistent homogeneous method. A maximum likelihood calculation suggests that the minimum stellar mass for a Type II-P to form is m_min = 8.5 (+1.0, −1.5) M☉ and the maximum mass for II-P progenitors is m_max = 16.5 ± 1.5 M☉, assuming a Salpeter initial mass function holds for the progenitor population (in the range Γ = −1.35 (+0.3, −0.7)). The minimum mass is consistent with current estimates for the upper limit to white dwarf progenitor masses, but the maximum mass does not appear consistent with massive star populations in Local Group galaxies. Red supergiants in the Local Group have masses up to 25 M☉ and the minimum mass to produce a Wolf-Rayet star in single star evolution (between solar and LMC metallicity) is similarly 25–30 M☉. The reason we have not detected any high-mass red supergiant progenitors above 17 M☉ is unclear, but we estimate that it is statistically significant at 2.4σ confidence. Two simple reasons for this could be that we have systematically underestimated the progenitor masses due to dust extinction, or that stars in the range 17–25 M☉ produce other kinds of SNe which are not II-P. We discuss these possibilities and find that neither provides a satisfactory solution. We term this discrepancy the 'red supergiant problem' and speculate that these stars could have core masses high enough to form black holes and SNe which are too faint to have been detected. We compare the ⁵⁶Ni masses ejected in the SNe to the progenitor mass estimates and find that low-luminosity SNe with low ⁵⁶Ni production are most likely to arise from explosions of low-mass progenitors near the mass threshold that can produce a core-collapse.
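A minimal sketch of a likelihood of the kind such limits are fitted with, assuming the quoted Salpeter slope (Γ = −1.35, i.e. dN/dm ∝ m^(Γ−1)) and writing m_i for the measured progenitor masses and u_j for the upper limits from non-detections; this is illustrative, not necessarily the paper's exact expression:

```latex
\[
  L(m_{\min}, m_{\max}) \;=\;
  \prod_{i\,\in\,\mathrm{detections}}
    \frac{m_i^{\,\Gamma-1}}{\displaystyle\int_{m_{\min}}^{m_{\max}} m^{\,\Gamma-1}\,dm}
  \;\times\;
  \prod_{j\,\in\,\mathrm{non\text{-}detections}}
    \frac{\displaystyle\int_{m_{\min}}^{\min(u_j,\,m_{\max})} m^{\,\Gamma-1}\,dm}
         {\displaystyle\int_{m_{\min}}^{m_{\max}} m^{\,\Gamma-1}\,dm}
\]
```

Maximising a likelihood of this form over (m_min, m_max) would yield point estimates and confidence intervals of the kind quoted above.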

Relevance:

10.00%

Abstract:

A scalable, large-vocabulary, speaker-independent speech recognition system is being developed using Hidden Markov Models (HMMs) for acoustic modeling and a Weighted Finite State Transducer (WFST) to compile sentence, word, and phoneme models. The system comprises a software backend search and an FPGA-based Gaussian calculation, which are covered here. In this paper, we present an efficient pipelined design implemented both as an embedded peripheral and as a scalable, parallel hardware accelerator. Both architectures have been implemented on an Alpha Data XRC-5T1 reconfigurable computer housing a Virtex 5 SX95T FPGA. The core has been tested and is capable of calculating a full set of Gaussian results from 3825 acoustic models in 9.03 ms, which, coupled with a backend search over a 5000-word vocabulary, has provided an accuracy of over 80%. Parallel implementations have been designed with up to 32 cores and have been successfully implemented with a clock frequency of 133 MHz.
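The per-frame arithmetic such a Gaussian core accelerates is straightforward to state in software. The sketch below evaluates diagonal-covariance Gaussian log-likelihoods for all acoustic models against one feature vector; the model count matches the abstract, but the 39-dimensional features, random parameters and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

N_MODELS, DIM = 3825, 39            # e.g. 39-dim MFCC-style features

rng = np.random.default_rng(0)
means = rng.normal(size=(N_MODELS, DIM))
inv_vars = rng.uniform(0.5, 2.0, size=(N_MODELS, DIM))
# Per-model constant: -0.5 * (D*log(2*pi) + sum(log var))
log_consts = -0.5 * (DIM * np.log(2 * np.pi) - np.log(inv_vars).sum(axis=1))

def gaussian_log_likelihoods(frame):
    """Return log N(frame; mean_k, var_k) for every acoustic model k."""
    diff = frame - means                      # (N_MODELS, DIM)
    return log_consts - 0.5 * (diff * diff * inv_vars).sum(axis=1)

frame = rng.normal(size=DIM)                  # one observation vector
scores = gaussian_log_likelihoods(frame)      # fed to the backend search
print(scores.shape, scores.argmax())
```

An FPGA pipeline computes exactly this multiply-accumulate per model per frame, which is why it parallelises so readily across cores.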

Relevance:

10.00%

Abstract:

The supernova SN 2001du was discovered in the galaxy NGC 1365 at a distance of 19 ± 2 Mpc, and is a core-collapse event of Type II-P. Images of this galaxy, of moderate depth, were taken with the Hubble Space Telescope approximately 6.6 yr before discovery and include the supernova position in the WFPC2 field of view. We have observed the supernova with the WFPC2 to allow accurate differential astrometry of SN 2001du on the pre-explosion frames. As a core-collapse event, it is expected that the progenitor was a massive, luminous star. There is a marginal (3σ) detection of a source close to the supernova position on the pre-discovery V-band frame, but it is not precisely coincident and we do not believe it to be a robust detection of a point source. We conclude that no stellar progenitor is detected at the supernova position and derive sensitivity limits of the pre-discovery images that provide an upper mass limit for the progenitor star. We estimate that the progenitor had a mass of less than 15 M☉. We revisit two other nearby Type II-P supernovae that have high-quality pre-explosion images, and refine the upper mass limits for their progenitor stars. Using a new distance determination for SN 1999gi from the expanding photosphere method, we revise the upper mass limit to 12 M☉. We present new HST images of the site of SN 1999em, which validate the use of lower spatial resolution ground-based images in the progenitor studies, and use a new Cepheid distance to the galaxy to measure an upper mass limit of 15 M☉ for that progenitor. Finally we compile all the direct information available for the progenitors of eight nearby core-collapse supernovae and compare their mass estimates with the latest stellar evolutionary models of pre-supernova evolution, which attempt to relate metallicity and mass to supernova type. Although the comparison is statistically limited at present, reasonable agreement is already found for the lower-mass events (generally the II-P), but some discrepancies appear at higher masses.
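Schematically, the route from image sensitivity limits to an upper mass limit runs through a luminosity limit; the following is a hedged outline of the standard chain, with generic symbols rather than the paper's specific calibration:

```latex
\[
  M_{\lim} \;=\; m_{\lim} - \mu - A_{\lambda}, \qquad
  \log_{10}\frac{L_{\lim}}{L_{\odot}}
    \;=\; -\,\frac{(M_{\lim} + BC) - M_{\mathrm{bol},\odot}}{2.5}
\]
```

Here m_lim is the limiting apparent magnitude, μ the distance modulus, A_λ the extinction and BC the bolometric correction, with M_bol,☉ = 4.74; the resulting luminosity limit is then compared with the end-point luminosities of stellar evolutionary tracks to bound the progenitor's initial mass.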

Relevance:

10.00%

Abstract:

Tuning and stacking approaches have been used to compile non-annually resolved peatland palaeo-water table records in several studies. This approach has been proposed as a potential way forward to overcome the chronological problems that beset the correlation of records and may help in the upscaling of palaeoclimate records for climate model-data comparisons. This paper investigates the uncertainties in this approach using a published water table compilation from Northern Ireland. Firstly, three plausible combinations of chronological match points are used to assess the variability of the reconstructions. It is apparent that even with markedly different match point combinations, the compilations are highly similar, especially when a 100-year running mean line is used for interpretation. Secondly, sample-specific reconstruction errors are scaled in relation to the standardised water table units and illustrated on the compiled reconstruction. Thirdly, the total chronological errors for each reconstruction are calculated using Bayesian age-modelling software. Although tuning and stacking approaches may be suitable for compiling peat-based palaeoclimate records, it is important that the reconstruction and chronological errors are acknowledged and clearly illustrated in future studies. The tuning of peat-based proxy climate records is based on a potentially flawed assumption that events are synchronous between sites. © 2011 Elsevier Ltd and INQUA.
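For concreteness, a minimal sketch of the stack-and-smooth step follows: standardise each site's record, average the sites into a compiled record, and apply the 100-year running mean used for interpretation. The records here are synthetic and the 10-year resolution is an assumption; a real compilation would first align the sites on chronological match points (the tuning step, omitted here).

```python
import numpy as np

years = np.arange(0, 5000, 10)          # assumed 10-year resolution

def standardise(record):
    # Convert a water-table record to standardised (z-score) units.
    return (record - record.mean()) / record.std()

rng = np.random.default_rng(1)
sites = [standardise(np.cumsum(rng.normal(size=years.size)))
         for _ in range(4)]             # four synthetic site records

stack = np.mean(sites, axis=0)          # the compiled reconstruction

window = 10                             # 100 yr at 10-yr resolution
running_mean = np.convolve(stack, np.ones(window) / window, mode="same")
print(running_mean[:5])
```

Sample-specific reconstruction errors and chronological errors would then be propagated onto the stacked curve, as the abstract recommends.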

Relevance:

10.00%

Abstract:

Inner-city, confined-site construction is quickly becoming the norm within the construction sector. The aim of this paper is to identify and document the effect, if any, that a confined construction site environment has on the productivity of on-site personnel. To compile the relevant information and attain appropriate results, a qualitative analytical approach is adopted, incorporating multiple case studies from Ireland, Northern Ireland and the USA. For each case study, a minimum of three individual interviews and focus group seminars were conducted to aid in the collection of the data while also assisting in the confirmation of the factors identified from a critique of the relevant literature. From the case studies and discussions, a list of the key issues pertaining to the on-site productivity of personnel emerged and is documented as follows: 1) overcrowding of personnel at workstations; 2) lack of space for the effective movement of personnel on-site; 3) numerous trades working within a single space on-site. By identifying the issues highlighted and proactively mitigating or eliminating the factors detailed, on-site management professionals can strive to ensure maximum productivity from the industry's most important resource – people.

Relevance:

10.00%

Abstract:

Published records, original data from recent field work on all of the islands of the Azores (NE Atlantic), and a revision of the entire mollusc collection deposited in the Department of Biology of the University of the Azores (DBUA) were used to compile a checklist of the shallow-water Polyplacophora of the Azores. Lepidochitona cf. canariensis and Tonicella rubra are reported for the first time for this archipelago, increasing the recorded Azorean fauna to seven species.

Relevance:

10.00%

Abstract:

Heterogeneous computing technologies, such as multi-core CPUs, GPUs and FPGAs, can provide significant performance improvements. However, developing applications for these technologies often results in coupling applications to specific devices, typically through the use of proprietary tools. This paper presents SHEPARD, a compile-time and run-time framework that decouples application development from the target platform and enables run-time allocation of tasks to heterogeneous computing devices. Through the use of special annotated functions, called managed tasks, SHEPARD approximates a task's performance on available devices and, coupled with an approximation of current device demand, decides which device can satisfy the task with the lowest overall execution time. Experiments using a task-parallel application, based on an in-memory database, demonstrate the opportunity for automatic run-time task allocation to achieve speed-up over a static allocation to a single specific device. © 2014 IEEE.
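The placement decision described here reduces to a small argmin over devices; the sketch below shows the idea with made-up cost and demand numbers, not SHEPARD's actual API.

```python
# Pick the device whose queued demand plus the managed task's estimated
# cost gives the earliest finish time. All names/values are illustrative.
est_cost = {                      # task's estimated run time per device (ms)
    "cpu": 40.0, "gpu": 12.0, "fpga": 18.0,
}
pending_demand = {                # work already queued per device (ms)
    "cpu": 5.0, "gpu": 60.0, "fpga": 10.0,
}

def choose_device(task_cost, demand):
    # Lowest overall execution time = current demand + this task's cost.
    return min(task_cost, key=lambda dev: demand[dev] + task_cost[dev])

device = choose_device(est_cost, pending_demand)
pending_demand[device] += est_cost[device]   # update demand for next task
print(device)                                # -> 'fpga' (10 + 18 is lowest)
```

Updating the demand estimate after each placement is what lets the scheme adapt at run time instead of statically binding every task to the nominally fastest device.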

Relevance:

10.00%

Abstract:

One of the outstanding issues in parallel computing is the selection of task granularity. This work proposes a solution to the task granularity problem by lowering the overhead of the task scheduler and as such supporting very fine-grain tasks. Using a combination of static (compile-time) scheduling and dynamic (run-time) scheduling, we aim to make scheduling decisions as fast as with static scheduling while retaining the dynamic load-balancing properties of fully dynamic scheduling. We present an example application and discuss the requirements on the compiler and runtime system to realize hybrid static/dynamic scheduling.
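A toy sketch of the hybrid scheme: the task-to-worker partition is fixed as if at compile time, while an idle worker steals at run time to restore balance. The queues, task names and two-worker setup are illustrative assumptions, not the paper's runtime system.

```python
from collections import deque

tasks = [f"t{i}" for i in range(10)]

# Static phase: an (intentionally uneven) compile-time partition.
queues = [deque(tasks[:8]), deque(tasks[8:])]
executed = [[], []]

# Dynamic phase: round-based execution with work stealing.
while any(queues):
    for w, q in enumerate(queues):
        if not q:                                  # idle: steal a task
            victim = max(range(len(queues)), key=lambda v: len(queues[v]))
            if queues[victim]:
                q.append(queues[victim].pop())
        if q:
            executed[w].append(q.popleft())        # run one task

print(executed)   # both workers end up with a balanced share
```

The static partition keeps the common-case scheduling decision to a queue pop, while stealing only triggers when the compile-time estimate was wrong, which is the cost structure the abstract aims for.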

Relevance:

10.00%

Abstract:

Bioenergy is a key component of the European Union's long-term energy strategy across all sectors, with a target contribution of up to 14% of the energy mix by 2020. It is estimated that there is the potential for 1 TWh of primary energy from biogas per million persons in Europe, derived from agricultural by-products and waste. With an agricultural sector that accounts for 75% of land area and a large number of advanced engineering firms, Northern Ireland is a region with considerable potential for an integrated biogas industry. Northern Ireland is also heavily reliant on imported fossil fuels. Despite this, the industry is underdeveloped and there is a need for a collaborative approach from research, business and policy-makers across all sectors to optimise Northern Ireland's abundant natural resources. 'Developing Opportunities in Bio-Energy' (Do Bioenergy) is a recently completed project that involved both academic and specialist industrial partners. The aim was to develop a biogas research action plan for 2020 to define priorities for intersectoral regional development, co-operation and knowledge transfer in the field of production and use of biogas. Consultations were held with regional stakeholders, and working groups were established to compile supporting data and decide key objectives and implementation activities. Within the context of this study it was found that biogas from feedstocks including grass, agricultural slurry, and household and industrial waste has the potential to contribute between 2.5% and 11% of Northern Ireland's total energy consumption. The economics of on-farm production were assessed, along with potential markets and alternative uses for biogas in sectors such as transport, heat and electricity. Arising from these baseline data, the Do Bioenergy action plan was developed. The plan sets out a strategic research agenda and details priorities and targets for 2020. The challenge for Northern Ireland is how best to utilise the biogas – as electricity, heat or vehicle fuel – and in what proportions. The research areas identified were: development of small-scale solutions for biogas production and use; solutions for improved nutrient management; knowledge supporting and developing the integration of biogas into the rural economy; and future crops and bio-based products. The human resources and costs for implementation were estimated at 80 person-years and £25 million respectively. It is also clear that the development of a robust biogas sector requires some reform of the regulatory regime, including a planning policy framework and a need to address social acceptance issues. The action plan was developed from a regional perspective, but the results may be applicable to other regions in Europe and elsewhere. This paper presents the methodology, results, analysis and key findings of the Do Bioenergy report for Northern Ireland.

Relevance:

10.00%

Abstract:

BACKGROUND: While the discovery of new drugs is a complex, lengthy and costly process, identifying new uses for existing drugs is a cost-effective approach to therapeutic discovery. Connectivity mapping integrates gene expression profiling with advanced algorithms to connect genes, diseases and small molecule compounds and has been applied in a large number of studies to identify potential drugs, particularly to facilitate drug repurposing. Colorectal cancer (CRC) is a commonly diagnosed cancer with high mortality rates, presenting a worldwide health problem. With the advancement of high throughput omics technologies, a number of large scale gene expression profiling studies have been conducted on CRCs, providing multiple datasets in gene expression data repositories. In this work, we systematically apply gene expression connectivity mapping to multiple CRC datasets to identify candidate therapeutics to this disease.

RESULTS: We developed a robust method to compile a combined gene signature for colorectal cancer across multiple datasets. Connectivity mapping analysis with this signature of 148 genes identified 10 candidate compounds, including irinotecan and etoposide, which are chemotherapy drugs currently used to treat CRCs. These results indicate that we have discovered high-quality connections between the CRC disease state and the candidate compounds, and that the gene signature we created may serve as a potential therapeutic target in treating the disease. The method we propose is highly effective in generating a quality gene signature from multiple datasets; the publication of the combined CRC gene signature and the list of candidate compounds from this work will benefit both the cancer and systems biology research communities for further development and investigation.
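The core of combining signatures across datasets is keeping genes that are consistently regulated, then scoring compounds against that consensus; the sketch below illustrates this with made-up gene and drug data, not the paper's algorithm (which works with ranked reference expression profiles).

```python
# Per-dataset differential expression: +1 up-regulated, -1 down-regulated.
datasets = [
    {"TP53": -1, "MYC": +1, "CDX2": -1, "AXIN2": +1},
    {"TP53": -1, "MYC": +1, "CDX2": -1, "KRT20": -1},
    {"TP53": -1, "MYC": +1, "AXIN2": +1, "CDX2": -1},
]

# Combined signature: genes present in every dataset with the same sign.
common = set.intersection(*(set(d) for d in datasets))
signature = {g: datasets[0][g] for g in common
             if all(d[g] == datasets[0][g] for d in datasets)}

def connectivity(signature, drug_profile):
    # Negative score = the drug reverses the disease signature (candidate).
    return sum(signature[g] * drug_profile.get(g, 0) for g in signature)

drug = {"TP53": +1, "MYC": -1, "CDX2": +1}    # hypothetical drug effects
print(sorted(signature), connectivity(signature, drug))   # score -3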

Relevance:

10.00%

Abstract:

Background: Although Plasmodium falciparum transmission frequently exhibits seasonal patterns, the drivers of malaria seasonality are often unclear. Given the massive variation in the landscape upon which transmission acts, intra-annual fluctuations are likely influenced by different factors in different settings. Further, the presence of potentially substantial inter-annual variation can mask seasonal patterns; it may be that a location has "strongly seasonal" transmission and yet no single season ever matches the mean, or synoptic, curve. Accurate accounting of seasonality can inform efficient malaria control and treatment strategies. In spite of the demonstrable importance of accurately capturing the seasonality of malaria, the data required to describe these patterns are not universally accessible; as such, localized and regional efforts at quantifying malaria seasonality are disjointed and not easily generalized.

Methods: The purpose of this review was to audit the literature on seasonality of P. falciparum and quantitatively summarize the collective findings. Six search terms were selected to systematically compile a list of papers relevant to the seasonality of P. falciparum transmission, and a questionnaire was developed to catalogue the manuscripts. 

Results and discussion: 152 manuscripts were identified as relating to the seasonality of malaria transmission, deaths due to malaria or the population dynamics of mosquito vectors of malaria. Among these, there were 126 statistical analyses and 31 mechanistic analyses (some manuscripts did both). 

Discussion: Identified relationships between temporal patterns in malaria and its climatological drivers varied greatly across the globe, with different drivers appearing important in different locations. Although commonly studied drivers of malaria such as temperature and rainfall were often found to significantly influence transmission, the lags between a weather event and a resulting change in malaria transmission also varied greatly by location.

Conclusions: The contradictory results of studies using similar data and modelling approaches from similar locations, as well as the confounding nature of climatological covariates, underline the importance of a multi-faceted modelling approach that attempts to capture seasonal patterns at both small and large spatial scales.

Relevance:

10.00%

Abstract:

Approximate execution is a viable technique for environments with energy constraints, provided that applications are given the mechanisms to produce outputs of the highest possible quality within the available energy budget. This paper introduces a framework for energy-constrained execution with controlled and graceful quality loss. A simple programming model allows developers to structure the computation in different tasks and to express the relative importance of these tasks for the quality of the end result. For non-significant tasks, the developer can also supply less costly, approximate versions. The target energy consumption for a given execution is specified when the application is launched. A significance-aware runtime system employs an application-specific analytical energy model to decide how many cores to use for the execution, the operating frequency for these cores, and the degree of task approximation, so as to maximize the quality of the output while meeting the user-specified energy constraints. Evaluation on a dual-socket 16-core Intel platform using 9 benchmark kernels shows that the proposed framework picks the optimal configuration with high accuracy. A comparison with loop perforation (a well-known compile-time approximation technique) shows that the proposed framework results in significantly higher quality for the same energy budget.
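The decision the runtime makes can be pictured as a small search over configurations under an analytical energy model; the sketch below is a minimal illustration of that idea, with a toy power model and made-up constants rather than the paper's application-specific model.

```python
import itertools

CORES = [2, 4, 8, 16]
FREQS = [1.2, 1.8, 2.4]                    # GHz
APPROX = {0: (1.00, 1.00),                 # level -> (fraction of work
          1: (0.80, 0.92),                 #  executed exactly, quality)
          2: (0.60, 0.78)}

TOTAL_CYCLES = 2e10                        # work in the fully exact run

def estimate(cores, freq_ghz, level):
    # Toy analytical model: static + dynamic power, time scales with work.
    work_frac, quality = APPROX[level]
    seconds = TOTAL_CYCLES * work_frac / (cores * freq_ghz * 1e9)
    power_w = 3.0 + 1.5 * cores * freq_ghz ** 3
    return power_w * seconds, quality      # (energy in joules, quality)

def pick_configuration(budget_joules):
    best = None                            # (quality, cores, freq, level)
    for c, f, lvl in itertools.product(CORES, FREQS, APPROX):
        energy, quality = estimate(c, f, lvl)
        if energy <= budget_joules and (best is None or quality > best[0]):
            best = (quality, c, f, lvl)
    return best

print(pick_configuration(budget_joules=30.0))   # best quality within 30 J
```

The real system evaluates such a model at launch time against the user's budget; the search itself is cheap because the configuration space (cores × frequencies × approximation levels) is small.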

Relevance:

10.00%

Abstract:

Revenue Management's most cited definition is probably "to sell the right accommodation to the right customer, at the right time and at the right price, with optimal satisfaction for customers and hoteliers". Smart Revenue Management (SRM) is a project that aims to develop smart automatic techniques for the efficient optimization of occupancy and rates of hotel accommodation, commonly referred to as revenue management. One of the objectives of this project is to demonstrate that the collection of Big Data, followed by an appropriate assembly of functionalities, will make it possible to generate the Data Warehouse necessary to produce high-quality business intelligence and analytics. This will be achieved through the collection of data extracted from a variety of sources, including the web. This paper proposes a three-stage framework to develop the Big Data Warehouse for the SRM: first, the compilation of all available information (in the present case, restricted to information extracted from the web by a web crawler – raw data); second, the storing of that raw data in a primary NoSQL database; and third, the conception of a set of functionalities, rules, principles and semantics to select, combine and store in a secondary relational database the information meaningful for Revenue Management (the Big Data Warehouse). The last stage is the principal focus of the paper. In this context, clues are also given on how to compile information for Business Intelligence. All these functionalities contribute to a holistic framework that, in the future, will make it possible to anticipate customer and competitor behaviour, fundamental elements in fulfilling Revenue Management's aims.
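A minimal sketch of the three-stage flow: raw crawled documents land in a primary schema-less store, a rule layer selects the meaningful fields, and the result is loaded into a relational warehouse table. The field names, rules and schema are illustrative assumptions, not the SRM project's actual design.

```python
import json
import sqlite3

raw_store = [                      # stand-in for the primary NoSQL store
    json.dumps({"hotel": "H1", "date": "2016-07-01", "rate": "120 EUR",
                "source": "web-crawler"}),
    json.dumps({"hotel": "H1", "date": "2016-07-01", "rate": "bad-data"}),
]

def transform(doc):
    """Rule layer: keep only records with a parseable numeric rate."""
    record = json.loads(doc)
    amount = record.get("rate", "").split()[0]
    if not amount.replace(".", "", 1).isdigit():
        return None                # rejected by the semantic rules
    return (record["hotel"], record["date"], float(amount))

db = sqlite3.connect(":memory:")   # stand-in for the secondary warehouse
db.execute("CREATE TABLE rates (hotel TEXT, date TEXT, rate REAL)")
rows = [r for r in map(transform, raw_store) if r is not None]
db.executemany("INSERT INTO rates VALUES (?, ?, ?)", rows)
print(db.execute("SELECT * FROM rates").fetchall())
```

Separating the raw store from the curated relational store is what lets the rule layer evolve (new sources, new semantics) without re-crawling.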

Relevance:

10.00%

Abstract:

This dissertation analyses some of the problems associated with tall buildings at the design stage, and compiles a body of information and scientific knowledge on the subject. Several structural-system solutions that can be conceived for tall buildings are described. Subsequently, based on the structural design of a 25-storey building located in Luanda, the capital of Angola, the objective was to analyse, statically and dynamically, its behaviour when subjected primarily to wind and seismic actions. The structural analysis was carried out using two automatic calculation packages, Cypecad and Robot Structural Analysis Professional, and by the methods prescribed in the Regulamento de Segurança e Ações (the Portuguese safety and actions regulation for building and bridge structures) and Eurocode 8, "Design of structures for earthquake resistance". The topic of construction phasing is addressed, a subject that reveals some limitations of the analysis programs used, and a simplified method is described for predicting its effects on the final design. The results obtained confirmed the good behaviour of the structure with respect to the ultimate and serviceability limit states. It is concluded that the structural system adopted in the model under study is adequately designed with regard to collapse and damage limitation.