957 results for Cost Modelling
Abstract:
We perform a detailed modelling of the post-outburst surface emission of the low magnetic field magnetar SGR 0418+5729. The dipolar magnetic field of this source, B = 6 × 10¹² G estimated from its spin-down rate, is in the observed range of magnetic fields for normal pulsars. The source is further characterized by a high pulse fraction and a single-peak profile. Using synthetic temperature distribution profiles, and fully accounting for the general-relativistic effects of light deflection and gravitational redshift, we generate synthetic X-ray spectra and pulse profiles that we fit to the observations. We find that asymmetric and symmetric surface temperature distributions can reproduce equally well the observed pulse profiles and spectra of SGR 0418. Nonetheless, the modelling allows us to place constraints on the system geometry (i.e. the angles ψ and ξ that the rotation axis makes with the line of sight and the dipolar axis, respectively), as well as on the spot size and temperature contrast on the neutron star surface. After performing an analysis iterating between the pulse profile and spectra, as done in similar previous works, we further employed, for the first time in this context, a Markov Chain Monte Carlo approach to extract constraints on the model parameters from the pulse profiles and spectra simultaneously. We find that, to reproduce the observed spectrum and flux modulation: (a) the angles must be restricted to 65° ≲ ψ + ξ ≲ 125° or 235° ≲ ψ + ξ ≲ 295°; (b) the temperature contrast between the poles and the equator must be at least a factor of ∼6; and (c) the size of the hottest region ranges between 0.2 and 0.7 km (including uncertainties on the source distance). Lastly, we interpret our findings within the context of internal and external heating models.
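As a rough, illustrative companion to the geometric modelling described above (not the authors' code), the sketch below computes the pulse profile of a single small hot spot on a slowly rotating neutron star, using the commonly cited Beloborodov approximation for light deflection, cos α ≈ u + (1 − u) cos θ with u = R_s/R. The angle names follow the text; the compactness and geometry values are hypothetical placeholders, not the fitted SGR 0418+5729 parameters.

```python
# Hedged sketch: pulse profile of a point-like hot spot with approximate
# gravitational light bending (Beloborodov 2002). Illustrative only.
import numpy as np

def pulse_profile(psi_deg, xi_deg, u=0.4, nphase=256):
    """Relative flux vs. rotation phase for an isotropically emitting point spot.

    psi_deg: angle between rotation axis and line of sight.
    xi_deg:  angle between rotation axis and spot (dipole) axis.
    u:       compactness R_s/R (assumed value).
    """
    psi, xi = np.radians([psi_deg, xi_deg])
    phase = np.linspace(0.0, 1.0, nphase, endpoint=False)
    cos_theta = np.cos(psi) * np.cos(xi) + np.sin(psi) * np.sin(xi) * np.cos(2 * np.pi * phase)
    cos_alpha = u + (1.0 - u) * cos_theta        # approximate light deflection
    flux = np.clip(cos_alpha, 0.0, None)         # spot hidden when cos(alpha) < 0
    return phase, flux / flux.max()

# Example geometry consistent with the quoted constraint 65° ≲ ψ + ξ ≲ 125°
phase, f = pulse_profile(psi_deg=60.0, xi_deg=30.0)
pulsed_fraction = (f.max() - f.min()) / (f.max() + f.min())
```

In an MCMC fit of the kind described, a log-likelihood built from such profiles, together with the spectrum, would be sampled over ψ, ξ, spot size and temperature.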
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for the quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant), in a context where high-density point sets acquired by high-resolution scanners must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. Although they decrease the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems, among those described above, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm. In this analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to assess the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant, positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been experimentally analysed and validated as comparable to that of the Euclidean distance, using a heterogeneous set of objects, scenarios and initial configurations.
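As an illustration of the trade-off discussed above, the following is a minimal sketch (not the thesis implementation) of point-to-point ICP in which the nearest-neighbour search uses a configurable Minkowski metric, so that Manhattan (p=1), Euclidean (p=2) or Chebyshev (p=inf) distances can be compared; function names and parameters are assumptions for illustration.

```python
# Hedged sketch: point-to-point ICP (3D) with a selectable Minkowski metric
# for the nearest-neighbour search, in the spirit of the reduced-cost metrics
# discussed above. Illustrative only, not the thesis code.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD), 3D points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, p=2, max_iter=50, tol=1e-6):
    """Align `source` to `target`; `p` selects the Minkowski norm used for matching."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(max_iter):
        dists, idx = tree.query(src, k=1, p=p)      # closest-neighbour search
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dists.mean()
        if abs(prev_err - err) < tol:               # convergence on mean residual
            break
        prev_err = err
    return src, err
```

Because the neighbour query dominates the per-iteration cost, changing only the metric parameter here is enough to compare convergence quality and run time across distances.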
Abstract:
It has been widely documented that when Building Information Modelling (BIM) is used, there is a shift of effort to the design phase. Little investigation has been done into the impact of this shift in effort and how it affects costs. It can be difficult to justify the increased expenditure on BIM in a market that is heavily driven by costs. Current studies attempt to quantify the return on investment (ROI) for BIM, and these returns can be seen to balance out the shift in effort and cost to the design phase. These studies, however, quantify the ROI based on the individual stakeholder's investment, without considering the impact that their project partners' use of BIM may have on their own profitability. In this study, a questionnaire investigated the opinions and experience of construction professionals, representing clients, consultants, designers and contractors, to determine fluctuations in costs by their magnitude and when they occur. These factors were examined more closely by interviewing senior members representing each of the stakeholder categories and comparing their experience of using BIM in environments where their project partners were also using BIM and where they were not. This determined how the use of, and investment in, BIM affects others and how costs are redistributed. This redistribution is not just through time but also between stakeholders and categories of costs. Some of these cost fluctuations, and how the cost of BIM is currently financed, are also highlighted in several case studies. The results show that the current distribution of costs, set for traditional 2D delivery, is hindering the potential success of BIM. There is also evidence that stakeholders who do not use BIM may benefit financially from the BIM use of others, and that collaborative BIM differs significantly from 'lonely' BIM in terms of benefits and profitability.
Abstract:
Solar heating of potable water has traditionally been accomplished through the use of solar thermal (ST) collectors. With the recent increases in availability and lower cost of photovoltaic (PV) panels, the potential of coupling PV solar arrays to electrically heated domestic hot water (DHW) tanks has been considered. Additionally, innovations in the SDHW industry have led to the creation of photovoltaic/thermal (PV/T) collectors, which heat water using both electrical and thermal energy. The current work compared the performance and cost-effectiveness of a traditional solar thermal (ST) DHW system to PV-solar-electric DHW systems and a PV/T DHW system. To accomplish this, a detailed TRNSYS model of the solar hot water systems was created and annual simulations were performed for 250 L/day and 325 L/day loads in Toronto, Vancouver, Montreal, Halifax, and Calgary. It was shown that, when considering thermal performance, PV-DHW systems were not competitive with ST-DHW and PVT-DHW systems. As an example, for Toronto the simulated annual solar fractions of PV-DHW systems were approximately 30%, while the ST-DHW and PVT-DHW systems achieved 65% and 71%, respectively. With current manufacturing and system costs, the PV-DHW system was the most cost-effective system for domestic purposes. The capital cost of the PV-DHW systems was approximately $1,923-$2,178, depending on the system configuration, while the ST-DHW and PVT systems were estimated to have capital costs of $2,288 and $2,373, respectively. Although the capital cost of the PVT-DHW system was higher than that of the other systems, a Present Worth analysis for a 20-year period showed that, for a 250 L/day load in Toronto, the Present Worth of the PV/T system was approximately $4,597, with the PV-DHW systems costing approximately $7,683-$7,816 and the ST-DHW system costing $5,238.
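For context, a present-worth comparison of the kind reported above can be sketched as follows; the discount rate, electricity price, escalation rate and auxiliary-energy figures are hypothetical placeholders, not values from the TRNSYS study.

```python
# Hedged sketch: a simple present-worth (life-cycle cost) comparison of DHW
# systems over a 20-year horizon. Economic assumptions and auxiliary energy
# use below are illustrative placeholders, not the study's inputs.
def present_worth(capital, annual_aux_kwh, price_per_kwh=0.13,
                  discount_rate=0.04, escalation=0.02, years=20):
    """Capital cost plus the discounted stream of auxiliary-energy costs."""
    pw = capital
    for n in range(1, years + 1):
        annual_cost = annual_aux_kwh * price_per_kwh * (1 + escalation) ** n
        pw += annual_cost / (1 + discount_rate) ** n
    return pw

# Hypothetical systems: (capital $, auxiliary electricity kWh/yr)
systems = {"PV-DHW": (2050, 3100), "ST-DHW": (2288, 1550), "PVT-DHW": (2373, 1300)}
for name, (capex, aux) in systems.items():
    print(f"{name}: present worth ≈ ${present_worth(capex, aux):,.0f}")
```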
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important due to the large power draw which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, modelling of the mill power draw has been done using empirical models. Although these models are reliable, they cannot model mills and operating conditions which are not within the model database boundaries. Also, due to their static nature, the impact of changing conditions within the mill on the power draw cannot be determined using such models. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is determined by the size of the smallest particle present in the model and by the material properties (stiffness). For small particles the computational time step will be short, whilst for large particles it will be larger. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of a mill whose charge is characterised by different size distributions, while preserving a constant charge mass and mill speed. (C) 2004 Elsevier Ltd. All rights reserved.
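The dependence of the DEM time step on the smallest particle can be illustrated with the commonly used Rayleigh time-step estimate; the material property values below are illustrative, not those of the mill charge studied.

```python
# Hedged sketch: the Rayleigh time-step estimate often used to bound the DEM
# integration step, showing why the smallest particle in the charge controls
# the cost of the simulation. Material properties are illustrative.
import math

def rayleigh_timestep(radius_m, density_kgm3=7800.0, shear_modulus_pa=7e10, poisson=0.3):
    """Approximate Rayleigh critical time step for a spherical particle."""
    return math.pi * radius_m * math.sqrt(density_kgm3 / shear_modulus_pa) \
           / (0.1631 * poisson + 0.8766)

for r_mm in (5, 10, 20, 40):
    dt = rayleigh_timestep(r_mm / 1000.0)
    print(f"r = {r_mm:>2} mm -> dt ≈ {dt * 1e6:.1f} µs")
# Halving the smallest particle radius halves the allowable time step, roughly
# doubling the number of integration steps needed per mill revolution.
```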
Abstract:
The purpose of this research was to estimate the cost-effectiveness of two rehabilitation interventions for breast cancer survivors, each compared to a population-based, non-intervention group (n = 208). The two services comprised an early home-based physiotherapy intervention (DAART, n = 36) and a group-based exercise and psychosocial intervention (STRETCH, n = 31). A societal perspective was taken, and costs included those incurred by the health care system, the survivors and the community. Health outcomes included: (a) 'rehabilitated cases', based on changes in health-related quality of life between 6 and 12 months post-diagnosis, using the Functional Assessment of Cancer Therapy - Breast Cancer plus Arm Morbidity (FACT-B+4) questionnaire, and (b) quality-adjusted life years (QALYs), using utility scores from the Subjective Health Estimation (SHE) scale. Data were collected using self-reported questionnaires, medical records and program budgets. A Monte Carlo modelling approach was used to test for uncertainty in cost and outcome estimates. The proportion of rehabilitated cases was similar across the three groups. From a societal perspective, compared with the non-intervention group, the DAART intervention appeared to be the most efficient option, with an incremental cost of $1344 per QALY gained, whereas the incremental cost per QALY gained from the STRETCH program was $14,478. Both DAART and STRETCH are low-cost, low-technology, health-promoting programs representing excellent public health investments.
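Below is a simplified sketch of the incremental cost-per-QALY calculation with Monte Carlo uncertainty propagation, in the spirit of the analysis above; all cost and utility distributions are hypothetical placeholders, not the study data.

```python
# Hedged sketch: incremental cost-effectiveness ratio (ICER) with simple Monte
# Carlo uncertainty. Means and standard errors are hypothetical placeholders,
# not the DAART/STRETCH study data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical per-person cost ($) and QALY distributions, intervention vs. control
cost_int,  qaly_int  = rng.normal(1_800, 300, n), rng.normal(0.78, 0.03, n)
cost_ctrl, qaly_ctrl = rng.normal(1_200, 250, n), rng.normal(0.74, 0.03, n)

delta_cost = cost_int - cost_ctrl
delta_qaly = qaly_int - qaly_ctrl
icer = delta_cost.mean() / delta_qaly.mean()      # incremental cost per QALY gained

wtp = 50_000                                      # hypothetical willingness-to-pay per QALY
prob_ce = np.mean(delta_cost < wtp * delta_qaly)  # probability the intervention is cost-effective
print(f"ICER ≈ ${icer:,.0f}/QALY; P(cost-effective at ${wtp:,}/QALY) = {prob_ce:.2f}")
```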
Abstract:
Substantial amounts of nitrogen (N) fertiliser are necessary for commercial sugarcane production because of the large biomass produced by sugarcane crops. Since this fertiliser is a substantial input cost and has implications if N is lost to the environment, there are pressing needs to optimise the supply of N to the crops' requirements. The complexity of the N cycle and the strong influence of climate, through its moderation of N transformation processes in the soil and its impact on N uptake by crops, make simulation-based approaches to this N management problem attractive. In this paper we describe the processes to be captured in modelling soil and plant N dynamics in sugarcane systems, and review the capability for modelling these processes. We then illustrate insights gained into improved management of N through simulation-based studies for the issues of crop residue management, irrigation management and greenhouse gas emissions. We conclude by identifying processes not currently represented in the models used for simulating N cycling in sugarcane production systems, and illustrate ways in which these can be partially overcome in the short term. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model, using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another. Arbitrary, though often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including sampling sites, variable selection, model selection, model implementation, internal model assessment, model prediction assessment, integration of the discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation with an independent data set, and scale assessment of the model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including the provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for producing adequate vegetation maps for conservation and forestry planning in poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.
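As a loose illustration of integrating many per-community statistical models into one composite map (not the paper's GIS workflow), the sketch below fits a presence/absence model per community and assigns each grid cell to the most probable community; the variable names and the choice of logistic regression are assumptions.

```python
# Hedged sketch: combining per-community statistical models into a composite
# vegetation map by assigning each cell to the community with the highest
# predicted probability. Illustrative stand-in for the 28 models described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def composite_map(env_grid, training_sets):
    """env_grid: (n_cells, n_predictors); training_sets: {community: (X, y)} presence/absence data."""
    communities = sorted(training_sets)
    prob = np.zeros((env_grid.shape[0], len(communities)))
    for j, name in enumerate(communities):
        X, y = training_sets[name]
        model = LogisticRegression(max_iter=1000).fit(X, y)
        prob[:, j] = model.predict_proba(env_grid)[:, 1]   # P(community present) per cell
    return np.array(communities)[prob.argmax(axis=1)]      # most likely community per cell
```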
Abstract:
In biologically mega-diverse countries that are undergoing rapid human landscape transformation, it is important to understand and model the patterns of land cover change. This problem is particularly acute in Colombia, where lowland forests are being rapidly cleared for cropping and ranching. We apply a conceptual model with a nested set of a priori predictions to analyse the spatial and temporal patterns of land cover change for six 50-100 km² case study areas in lowland ecosystems of Colombia. Our analysis included soil fertility, a cost-distance function, and neighbourhood of forest and secondary vegetation cover as independent variables. Deforestation and forest regrowth are tested using logistic regression analysis and an information criterion approach to rank the models and predictor variables. The results show that: (a) overall the process of deforestation is better predicted by the full model containing all variables, while for regrowth the model containing only the auto-correlated neighbourhood terms is a better predictor; (b) overall consistent patterns emerge, although there are variations across regions and time; and (c) during the transformation process, both the order of importance and significance of the drivers change. Forest cover follows a consistent logistic decline pattern across regions, with introduced pastures being the major replacement land cover type. Forest stabilizes at 2-10% of the original cover, with an average patch size of 15.4 (±9.2) ha. We discuss the implications of the observed patterns and rates of land cover change for conservation planning in countries with high rates of deforestation. (c) 2005 Elsevier Ltd. All rights reserved.
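The model-ranking step described above can be sketched as follows, using logistic regression and AIC; the column names and candidate model formulas are illustrative, not the study's exact specifications.

```python
# Hedged sketch: ranking alternative deforestation models with logistic
# regression and an information criterion (AIC). DataFrame columns
# (soil_fertility, cost_distance, forest_neigh, secveg_neigh, deforested)
# are illustrative names, not the study's variables.
import statsmodels.formula.api as smf

candidate_models = {
    "neighbourhood_only": "deforested ~ forest_neigh + secveg_neigh",
    "accessibility":      "deforested ~ cost_distance",
    "full":               "deforested ~ soil_fertility + cost_distance + forest_neigh + secveg_neigh",
}

def rank_models(df):
    """Fit each candidate logit model on df and return (name, AIC) pairs sorted best-first."""
    fits = {name: smf.logit(formula, data=df).fit(disp=0)
            for name, formula in candidate_models.items()}
    return sorted(((name, fit.aic) for name, fit in fits.items()), key=lambda t: t[1])
```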
Abstract:
Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery life. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes towards sub-micrometre integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analogue domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analogue functions have been moved into the digital domain. This means that analogue-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the digital and analogue worlds and, consequently, their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block used as the interface in high-resolution, low-power mixed-signal circuits. Modelling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioural models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter needs in order to achieve the required performance. The objective of this thesis is the behavioural modelling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data demonstrate that the proposed model is precise and accurate with respect to the behavioural simulations.
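As a minimal illustration of the kind of behavioural model discussed (not the thesis model itself), the sketch below simulates a first-order discrete-time Sigma-Delta modulator with two integrator non-idealities, finite DC gain (leakage) and sampled thermal noise; all parameter values are assumptions.

```python
# Hedged sketch: behavioural model of a first-order discrete-time Sigma-Delta
# modulator with integrator leakage (finite DC gain) and input-referred
# thermal noise. Parameter values are illustrative, not the thesis design.
import numpy as np

def sigma_delta_1st_order(x, dc_gain=1e3, noise_rms=50e-6, rng=None):
    """Return the 1-bit output stream for input samples x (|x| < 1)."""
    rng = rng or np.random.default_rng(0)
    leak = dc_gain / (dc_gain + 1)                    # leaky-integrator coefficient
    integ, y = 0.0, np.empty_like(x)
    for n, xn in enumerate(x):
        noise = rng.normal(0.0, noise_rms)            # sampled thermal noise
        v = xn + noise - y[n - 1] if n else xn + noise  # input minus 1-bit feedback
        integ = leak * integ + v                      # non-ideal integrator
        y[n] = 1.0 if integ >= 0 else -1.0            # 1-bit quantizer
    return y

# Usage: the low-pass-filtered mean of y over a signal period approximates x.
t = np.arange(16_384)
y = sigma_delta_1st_order(0.5 * np.sin(2 * np.pi * t / 1024))
```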
Abstract:
Benchmarking techniques have evolved over the years since Xerox's pioneering visits to Japan in the late 1970s. The focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single-function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact, the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
This thesis presents a theoretical investigation of applications of the Raman effect in optical fibre communication, as well as the design and optimisation of various Raman-based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on phosphosilicate-core fibre are analysed and optimised for efficiency using a discrete power balance model. The designs include a two-stage RFL based on phosphosilicate-core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple-output-wavelength RFL intended as a compact pump source for flat-gain Raman amplifiers. The use of phosphosilicate-core fibre is shown to effectively reduce the design complexity and hence leads to better efficiency, stability and potentially lower cost. Second, a generalised Raman amplified gain model based on power balance analysis and direct numerical simulation is developed. The approach can be used to effectively simulate optical transmission systems with distributed Raman amplification. Last, the potential employment of a hybrid amplification scheme, combining a distributed Raman amplifier and an Erbium-doped amplifier, is investigated using the generalised Raman amplified gain model. The analysis focuses on the use of this scheme to upgrade a standard fibre network to a 40 Gb/s system.
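Below is a minimal sketch of the steady-state power-balance equations underlying such models, for a single co-propagating pump and signal; the fibre parameters, pump power and span length are illustrative assumptions, not the designs studied in the thesis.

```python
# Hedged sketch: steady-state power-balance model for co-propagating Raman
# amplification of one signal by one pump. Fibre parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

g_R = 0.4e-3                              # Raman gain efficiency g_R/A_eff, 1/(W*m)
alpha_p, alpha_s = 0.06e-3, 0.046e-3      # pump/signal loss, 1/m (~0.26 / 0.20 dB/km)
lam_p, lam_s = 1455e-9, 1550e-9           # pump and signal wavelengths, m

def power_balance(z, P):
    Pp, Ps = P
    dPs = g_R * Pp * Ps - alpha_s * Ps                     # signal: Raman gain minus loss
    dPp = -(lam_s / lam_p) * g_R * Pp * Ps - alpha_p * Pp  # pump: depletion plus loss
    return [dPp, dPs]

L = 40e3                                                   # 40 km span
sol = solve_ivp(power_balance, (0.0, L), [0.5, 1e-3])      # 0.5 W pump, 1 mW signal launch
Pp_out, Ps_out = sol.y[:, -1]
on_off_db = 10 * np.log10(Ps_out / (1e-3 * np.exp(-alpha_s * L)))
print(f"on-off Raman gain ≈ {on_off_db:.1f} dB")
```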