906 results for computational cost
Abstract:
Hospitals invest considerable resources organizing operating suites and having surgeons and theatre staff available on an agreed schedule. A common impediment to efficiency is perioperative delay, including delays getting to the operating room or during the operation. Perioperative delays entail significant costs for hospitals, wasting staff time and operating theatre resources. They may also affect patient outcomes; prolonged surgery is a predictor for unanticipated admission following elective ambulatory surgery...
Abstract:
Increased focus on energy cost savings and carbon footprint reduction efforts has improved the visibility of building energy simulation, which became a mandatory requirement of several building rating systems. Despite developments in building energy simulation algorithms and user interfaces, some major challenges remain; an important one is computational demand and processing time. In this paper, we analyze the opportunities and challenges associated with this topic while executing a set of 275 parametric energy models simultaneously in EnergyPlus using a High Performance Computing (HPC) cluster. Successful parallel computing implementation of building energy simulations will not only reduce the time necessary to get results and enable scenario development for different design considerations, but might also enable Dynamic-Building Information Modeling (BIM) integration and near real-time decision-making. This paper concludes with a discussion of future directions and opportunities associated with building energy modeling simulations.
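The parallel-execution idea behind this abstract can be sketched in a few lines of Python. This is an illustration only: the stand-in `run_energy_model` function and its parameters are hypothetical, and the study itself dispatched real EnergyPlus models to an HPC cluster (typically via a job scheduler rather than a local thread pool). The point is simply that independent parametric cases can run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def run_energy_model(params):
    # Hypothetical stand-in for one EnergyPlus run: combines the
    # parameters into a mock "annual energy use" figure.
    insulation, glazing = params
    return 100.0 / insulation + 50.0 * glazing

# A small parametric sweep (the study ran 275 such models at once).
cases = [(r, g) for r in (1.0, 2.0, 3.0) for g in (0.3, 0.6)]

# Each case is independent, so all cases can be dispatched concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_energy_model, cases))
```

Because the simulations share no state, the speedup scales with the number of workers until I/O or licensing limits dominate.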
Abstract:
Computational epigenetics is a new area of research focused on exploring how DNA methylation patterns affect transcription factor binding and, in turn, gene expression patterns. The aim of this study was to produce a new protocol for the detection of DNA methylation patterns using computational analysis, which can be further confirmed by bisulfite PCR with serial pyrosequencing. The upstream regulatory element and pre-initiation complex relative to CpG islets within the methylenetetrahydrofolate reductase gene were determined via computational analysis and online databases. A 1,104 bp CpG island located at or near the alternative promoter site of the methylenetetrahydrofolate reductase gene was identified. The CpG plot indicated that CpG islets A and B, within the island, contained 62 and 75 % GC content, with CpG ratios of 0.70 and 0.80–0.95, respectively. Further exploration of CpG islets A and B indicated that the transcription start sites were GGC and that TATA boxes were absent. In addition, although six PROSITE motifs were identified in CpG B, no motifs were detected in CpG A. A number of cis-regulatory elements were found in different regions within CpGs A and B. Transcription factors were predicted to bind to CpGs A and B with varying affinities depending on the DNA methylation status. In addition, transcription factor binding may influence the expression patterns of the methylenetetrahydrofolate reductase gene by recruiting chromatin condensation inducing factors. These results have significant implications for the understanding of the architecture of transcription factor binding at CpG islets as well as DNA methylation patterns that affect chromatin structure.
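The CpG-island statistics quoted above (GC content and the observed/expected CpG ratio) are simple to compute. A minimal sketch using the standard definition obs/exp = (count(CG) × length) / (count(C) × count(G)); the example sequence is invented, not from the study.

```python
def gc_content(seq):
    # Fraction of bases that are G or C.
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def cpg_obs_exp(seq):
    # Observed/expected CpG ratio: (#CG * N) / (#C * #G).
    seq = seq.upper()
    n_c, n_g, n_cg = seq.count("C"), seq.count("G"), seq.count("CG")
    if n_c == 0 or n_g == 0:
        return 0.0
    return n_cg * len(seq) / (n_c * n_g)
```

By the conventional Gardiner-Garden and Frommer criteria, a region qualifies as a CpG island when it is sufficiently long with GC content above 50 % and an observed/expected CpG ratio above about 0.6, thresholds both islets A and B clear comfortably.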
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. 
We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
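The matrix-power idea behind MTM can be illustrated on a toy one-dimensional lattice. This is a minimal sketch assuming a symmetric random walk with reflecting walls; the geometry and parameters are invented, not taken from the paper.

```python
import numpy as np

# Transition matrix for a 1D lattice of N nodes: each timestep the
# particle jumps left or right with probability 1/2; jumps past the
# ends are redirected back onto the boundary node.
N = 5
P = np.zeros((N, N))
for i in range(N):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, N - 1)] += 0.5

# The propagator for t timesteps is the t-th matrix power: row i is
# the distribution over end positions for a walker starting at node i.
t = 10
propagator = np.linalg.matrix_power(P, t)
```

Unlike Monte Carlo sampling, each row of the result is an exact, noiseless probability distribution, which is the advantage the paper reports for MTM.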
Abstract:
Ever growing populations in cities are associated with a major increase in road vehicles and air pollution. The overall high levels of urban air pollution have been shown to pose a significant risk to city dwellers. However, the impacts of very high but temporally and spatially restricted pollution, and thus exposure, are still poorly understood. Conventional approaches to air quality monitoring are based on networks of static and sparse measurement stations. However, these are prohibitively expensive for capturing spatio-temporal heterogeneity and identifying pollution hotspots, which is required for the development of robust real-time strategies for exposure control. Current progress in developing low-cost micro-scale sensing technology is radically changing the conventional approach, allowing real-time information in a capillary form. But the question remains whether there is value in the less accurate data they generate. This article illustrates the drivers behind current rises in the use of low-cost sensors for air pollution management in cities, whilst addressing the major challenges for their effective implementation.
Abstract:
This paper develops a dynamic model for cost-effective selection of sites for restoring biodiversity when habitat quality develops over time and is uncertain. A safety-first decision criterion is used for ensuring a minimum level of habitats, and this is formulated in a chance-constrained programming framework. The theoretical results show: (i) inclusion of quality growth reduces the overall cost of achieving a future biodiversity target through relatively early establishment of habitats; (ii) consideration of uncertainty in growth increases total cost and delays establishment; and (iii) cost-effective trading of habitat requires an exchange rate between sites that varies over time. An empirical application to the red-listed umbrella species, the white-backed woodpecker, shows that the total cost of achieving the habitat targets specified in the Swedish recovery plan doubles if the target is to be achieved with high reliability, and that the equilibrating price on a habitat trading market differs considerably between quality growth combinations. © 2013 Elsevier GmbH.
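The chance-constrained idea, requiring a habitat target to hold with a chosen reliability, has a simple deterministic equivalent when the uncertain quantity is normally distributed. A minimal sketch of that conversion only; the numbers and the Normal assumption are illustrative and not the paper's model.

```python
from statistics import NormalDist

def required_mean_quality(h_min, sigma, reliability):
    # P(H >= h_min) >= reliability for H ~ Normal(mu, sigma) is
    # equivalent to mu >= h_min + z * sigma, where z is the
    # reliability quantile of the standard normal distribution.
    z = NormalDist().inv_cdf(reliability)
    return h_min + z * sigma

# Demanding higher reliability raises the habitat quality, and hence
# the cost, that must be planned for.
lo = required_mean_quality(100.0, 20.0, 0.50)
hi = required_mean_quality(100.0, 20.0, 0.95)
```

This mirrors, in miniature, the paper's finding that achieving the target with high reliability substantially increases total cost.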
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. 
This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
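The kind of stochastic predator-prey simulation described above can be sketched compactly. This is an invented toy model, a discrete-time Lotka-Volterra system with multiplicative noise and fixed-rate predator removal, not the authors' parameterisation.

```python
import random

def min_prey_over_run(removal_rate, steps=200, seed=1):
    # Discrete-time Lotka-Volterra dynamics with multiplicative noise;
    # a fixed fraction of predators is removed each step.
    random.seed(seed)
    prey, pred = 100.0, 20.0
    min_prey = prey
    for _ in range(steps):
        noise = random.uniform(0.95, 1.05)
        growth = 0.1 * prey * (1 - prey / 500.0)   # logistic prey growth
        predation = 0.005 * prey * pred            # prey killed
        new_prey = max((prey + growth - predation) * noise, 0.0)
        new_pred = max((pred + 0.002 * prey * pred - 0.1 * pred)
                       * noise * (1.0 - removal_rate), 0.0)
        prey, pred = new_prey, new_pred
        min_prey = min(min_prey, prey)
    return min_prey

no_control = min_prey_over_run(0.0)
fixed_rate = min_prey_over_run(0.3)
```

Repeating such runs across many random seeds, and charging a cost per predator removed, yields the expected minimum prey population per amount spent on which the strategy comparison is based.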
Abstract:
Individuals with limb amputation fitted with conventional socket-suspended prostheses often experience socket-related discomfort leading to a significant decrease in quality of life. Bone-anchored prostheses are increasingly acknowledged as a viable alternative method of attachment for artificial limbs. In this case, the prosthesis is attached directly to the residual skeleton through a percutaneous fixation. To date, a few osseointegration fixations are commercially available. Several devices are at different stages of development, particularly in Europe and the US. [1-15] Clearly, surgical procedures are currently booming worldwide. Indeed, Australia, and Queensland in particular, have one of the fastest growing populations. Previous studies involving either screw-type implants or press-fit fixations for bone-anchorage have focused on biomechanical aspects as well as the clinical benefits and safety of the procedure. [16-25] In principle, bone-anchored prostheses should eliminate the lifetime expenses associated with sockets and, consequently, potentially alleviate the financial burden of amputation for governmental organizations. Sadly, publications focusing on cost-effectiveness are sparse. In fact, only one study, published by Haggstrom et al (2012), reported that “despite significantly fewer visits for prosthetic service the annual mean costs for osseointegrated prostheses were comparable with socket-suspended prostheses”.[26] Consequently, governmental organizations such as Queensland Artificial Limb Services (QALS) are facing a number of challenges while adjusting financial assistance schemes that should be fair and equitable to their clients fitted with bone-anchored prostheses. Clearly, more scientific evidence extracted from governmental databases is needed to further consolidate analyses of the financial burden associated with both methods of attachment (i.e., conventional socket prostheses and bone-anchored prostheses). The purposes of the presentation will be: 1. To outline methodological avenues to assess the cost-effectiveness of bone-anchored prostheses compared to conventional socket prostheses; 2. To highlight the potential obstacles and limitations in cost-effectiveness analyses of bone-anchored prostheses; 3. To present preliminary results of a cost-comparison analysis of the costs, expressed in dollars over QALS funding cycles, for both methods of attachment.
Abstract:
Characterization of the epigenetic profile of humans since the initial breakthrough of the Human Genome Project has strongly established the key role of histone modifications and DNA methylation. These dynamic elements interact to determine the normal level of expression or methylation status of the constituent genes in the genome. Recently, considerable evidence has been put forward to demonstrate that environmental stress implicitly alters epigenetic patterns, causing an imbalance that can lead to cancer initiation. This chain of consequences has motivated attempts to computationally model the influence of histone modification and DNA methylation on gene expression and to investigate their intrinsic interdependency. In this paper, we explore the relation between DNA methylation and transcription and characterize in detail the histone modifications for specific DNA methylation levels using a stochastic approach.
Abstract:
Over the last few years, investigations of human epigenetic profiles have identified the key elements of change to be histone modifications, stable and heritable DNA methylation, and chromatin remodeling. These factors determine gene expression levels and characterise conditions leading to disease. In order to extract information embedded in long DNA sequences, data mining and pattern recognition tools are widely used, but efforts to date have been limited with respect to analysing epigenetic changes and their role as catalysts in disease onset. Useful insight, however, can be gained by investigating the associated dinucleotide distributions. The focus of this paper is to explore specific dinucleotide frequencies across defined regions within the human genome, and to identify new patterns between epigenetic mechanisms and DNA content. Signal processing methods, including Fourier and Wavelet Transformations, are employed and principal results are reported.
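Extracting a dinucleotide distribution from a sequence, the first step before any Fourier or wavelet analysis, is straightforward. A minimal sketch; the example sequence is invented.

```python
from collections import Counter

def dinucleotide_freqs(seq):
    # Relative frequency of each overlapping dinucleotide.
    seq = seq.upper()
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    counts = Counter(pairs)
    total = len(pairs)
    return {d: c / total for d, c in counts.items()}

freqs = dinucleotide_freqs("ACGTACGT")
```

Comparing such frequency tables between defined genomic regions, for example promoters versus bulk DNA, is one way to expose the CpG depletion and clustering patterns this line of work investigates.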
Abstract:
As computational models in fields such as medicine and engineering become more refined, resource requirements increase. In the first instance, these needs have been satisfied using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility. HPC users are therefore tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems are different, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while the performance of virtual CPUs (VCPUs) is satisfactory, network throughput may lead to difficulties.