955 results for Dynamic Marginal Cost
Abstract:
Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that directly use the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as a good alternative when the model of interest is complex. One of these methods is the synthetic likelihood (SL), which assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison to a competitor known as approximate Bayesian computation (ABC), as well as BSL's sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose to use an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of this paper.
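To make the synthetic likelihood idea concrete, the following is a minimal sketch (not the paper's implementation): the likelihood of an observed summary statistic is approximated by a multivariate normal whose mean and covariance are estimated from repeated model simulations at a candidate parameter value. The simulate and summarise functions here are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_log_likelihood(theta, s_obs, simulate, summarise, n_sims=200, seed=None):
    """Estimate the Gaussian (synthetic) log-likelihood of summary s_obs at theta."""
    rng = np.random.default_rng(seed)
    # Simulate n_sims data sets from the model and reduce each to a summary vector.
    summaries = np.array([summarise(simulate(theta, rng)) for _ in range(n_sims)])
    mu = summaries.mean(axis=0)               # estimated mean of the summary statistic
    sigma = np.cov(summaries, rowvar=False)   # estimated covariance of the summary statistic
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=sigma)
```

In a pseudo-marginal MCMC scheme an estimate of this kind would replace the intractable log-likelihood in the acceptance ratio; the unbiased variant discussed in the abstract swaps the plug-in normal density for an unbiased estimator of it.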
Abstract:
In Finland, one of the most important current issues in environmental management is the quality of surface waters. The increasing social importance of lakes and water systems has generated wide-ranging interest in lake restoration and management, concerning especially lakes suffering from eutrophication but also from other environmental impacts. Most of the factors deteriorating the water quality in Finnish lakes are connected to human activities. Especially since the 1940s, intensified farming practices and the discharge of sewage waters from scattered settlements, cottages and industry have affected the lakes, which have simultaneously developed into recreational areas for a growing number of people. Therefore, this study focused on small, human-impacted lakes that are located close to settlement areas and have significant value for the local population. The aim of this thesis was to obtain information from lake sediment records for on-going lake restoration activities and to show that a well-planned, properly focused lake sediment study is an essential part of the work related to the evaluation, target setting and restoration of Finnish lakes. Altogether 11 lakes were studied. The study of Lake Kaljasjärvi was related to the gradual eutrophication of the lake. In lakes Ormajärvi, Suolijärvi, Lehee, Pyhäjärvi and Iso-Roine the main focus was on sediment mapping, as well as on the long-term changes of the sedimentation, which were compared to Lake Pääjärvi. In Lake Hormajärvi, the roles of different kinds of sedimentation environments in the eutrophication development of the lake's two basins were compared. Lake Orijärvi has not been eutrophied, but ore exploitation and the related acid mine drainage from the catchment area have influenced the lake drastically, and the changes caused by the metal load were investigated. The twin lakes Etujärvi and Takajärvi are slightly eutrophied, but also suffer from problems associated with the erosion of the substantial peat accumulations covering the fringe areas of the lakes. These peat accumulations are related to Holocene water level changes, which were investigated. The methods used were chosen case-specifically for each lake. In general, acoustic soundings of the lakes, detailed description of the nature of the sediment and determinations of the physical properties of the sediment, such as water content, loss on ignition and magnetic susceptibility, were used, as was grain size analysis. A wide set of chemical analyses was also used. Diatom and chrysophycean cyst analyses were applied, and the diatom-inferred total phosphorus content was reconstructed. The results of these studies show that the ideal lake sediment study, as part of a lake management project, should be two-phased. In the first phase, thorough mapping of sedimentation patterns should be carried out by soundings and adequate corings. The actual sampling, based on the preliminary results, must include at least one long core from the main sedimentation basin for determining the natural background state of the lake. The recent, artificially impacted development of the lake can then be determined by short-core and surface sediment studies. The sampling must again be focused on the basis of the sediment mapping, and it should represent all the different sedimentation environments and bottom dynamic zones, considering the inlets and outlets as well as the effects of possible point-source loads on the lake.
In practice, the budget of lake management projects is usually limited and only the most essential work and analyses can be carried out. The set of chemical and biological analyses and dating methods must therefore be thoroughly considered and adapted to the specific management problem. The results also show that information obtained from a properly performed sediment study enhances the planning of the restoration, makes it possible to define the target of the remediation activities, and improves the cost-efficiency of the project.
Abstract:
The prevalence of resistance to phosphine in the rust-red flour beetle, Tribolium castaneum, from eastern Australia was investigated, as well as the potential fitness cost of this type of resistance. Discriminating dose tests on 115 population samples collected from farms from 2006 to 2010 showed that populations containing insects with the weakly resistant phenotype are common in eastern Australia (65.2% of samples), although the frequency of resistant phenotypes within samples was typically low (median of 2.3%). The population cage approach was used to investigate the possibility that carrying the alleles for weak resistance incurs a fitness cost. Hybridized populations were initiated using a resistant strain and either of two different susceptible strains. There was no evidence of a fitness cost based on the frequency of susceptible phenotypes in hybridized populations that were reared for seven generations without exposure to phosphine. This suggests that resistant alleles will tend to persist in field populations that have undergone selection even if selection pressure is removed. The prevalence of resistance is a warning that this species has been subject to considerable selection pressure and that effective resistance management practices are needed to address this problem. The resistance prevalence data also provide a baseline against which to measure management success.
Abstract:
The built environment is a major contributor to the world’s carbon dioxide emissions, with a considerable amount of energy being consumed in buildings for heating, ventilation and air-conditioning, space illumination, the use of electrical appliances, etc., to facilitate various anthropogenic activities. The development of sustainable buildings seeks to ameliorate this situation mainly by reducing energy consumption. Sustainable building design, however, is a complicated process involving a large number of design variables, each with a range of feasible values. There are also multiple, often conflicting, objectives involved, such as life cycle costs and occupant satisfaction. One approach to dealing with this is the use of optimization models. In this paper, a new multi-objective optimization model is developed for sustainable building design by considering the design objectives of cost and energy consumption minimization and occupant comfort level maximization. In a case study demonstration, it is shown that the model can derive a set of suitable design solutions in terms of life cycle cost, energy consumption and indoor environmental quality, helping the client and design team gain a better understanding of the design space and the trade-off patterns between different design objectives. The model can be very useful in the conceptual design stages to determine appropriate operational settings that achieve optimal building performance in terms of minimizing energy consumption and maximizing occupant comfort level.
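As a rough illustration of the trade-off analysis such a model supports, the sketch below (an assumption for illustration, not the paper's model) filters a set of candidate designs down to the non-dominated ones when life cycle cost, energy consumption and negated occupant comfort are all to be minimised.

```python
def pareto_front(designs):
    """designs: list of (cost, energy, -comfort) tuples; return the non-dominated ones."""
    front = []
    for d in designs:
        # d is dominated if another design is at least as good in every objective
        # and strictly better in at least one (here: not identical to d).
        dominated = any(
            other != d and all(o <= v for o, v in zip(other, d))
            for other in designs
        )
        if not dominated:
            front.append(d)
    return front

# Example: the second design trades higher cost for lower energy use, so both survive;
# the third is dominated by the first and is dropped.
print(pareto_front([(100, 50, -0.8), (120, 40, -0.8), (130, 55, -0.7)]))
```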
Abstract:
A mechanics-based linear analysis of the problem of dynamic instabilities in slender space launch vehicles is undertaken. The flexible-body dynamics of the moving vehicle is studied in an inertial frame of reference, including velocity-induced curvature effects, which have not been considered so far in the published literature. Coupling among the rigid-body modes, the longitudinal vibrational modes and the transverse vibrational modes due to an asymmetric lifting-body cross-section is considered. The model also incorporates the effects of aerodynamic forces and the propulsive thrust of the vehicle. The effects of the coupling between the combustion process (mass variation, developed thrust, etc.) and the variables involved in the flexible-body dynamics (displacements and velocities) are clearly brought out. The model is one-dimensional, and it can be applied to idealised slender vehicles with complex shapes. Computer simulations are carried out using a standard eigenvalue problem within an h-p finite element modelling framework. Stability regimes for a vehicle subjected to propulsive thrust are validated by comparison with results from the published literature. Numerical simulations are carried out for a representative vehicle to determine the instability regimes with vehicle speed and propulsive thrust as the parameters. The phenomena of static instability (divergence) and dynamic instability (flutter) are observed. The results at low Mach number match closely with the results obtained from previous models published in the literature.
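The divergence/flutter distinction can be illustrated with a generic linearised check (a sketch under simplifying assumptions, not the h-p finite element model of the paper): assemble the system matrix of the linearised equations of motion and inspect its eigenvalues.

```python
import numpy as np

def classify_stability(A, tol=1e-9):
    """Classify the linear system x' = A x obtained from a linearised structural model."""
    eigvals = np.linalg.eigvals(A)
    unstable = eigvals[eigvals.real > tol]
    if unstable.size == 0:
        return "stable"
    # An unstable real eigenvalue signals static instability (divergence);
    # an unstable complex-conjugate pair signals dynamic instability (flutter).
    if np.all(np.abs(unstable.imag) < tol):
        return "divergence"
    return "flutter"
```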
Abstract:
Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments because they support decoupled and asynchronous many-to-many information dissemination. Event systems are widely used, because asynchronous messaging provides a flexible alternative to RPC (Remote Procedure Call). They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. The filters are organized into a routing table in order to forward incoming events to proper subscribers and neighbouring routers. This thesis addresses the optimization of content-based routing tables organized using the covering relation and presents novel data structures and configurations for improving local and distributed operation. Data structures are needed for organizing filters into a routing table that supports efficient matching and runtime operation. We present novel results on dynamic filter merging and the integration of filter merging with content-based routing tables. In addition, the thesis examines the cost of client mobility using different protocols and routing topologies. We also present a new matching technique called temporal subspace matching. The technique combines two new features. The first feature, temporal operation, supports notifications, or content profiles, that persist in time. The second feature, subspace matching, allows more expressive semantics, because notifications may contain intervals and be defined as subspaces of the content space. We also present an application of temporal subspace matching pertaining to metadata-based continuous collection and object tracking.
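A small sketch of the covering relation around which such routing tables are organised may help: filter F1 covers F2 if every notification matched by F2 is also matched by F1, so a router only needs to forward the most general filters. Representing filters as attribute-to-interval maps is an assumption made here purely for illustration.

```python
def covers(f1: dict, f2: dict) -> bool:
    """True if filter f1 covers filter f2, i.e. every event matching f2 also matches f1.
    Filters map attribute names to (low, high) interval constraints."""
    for attr, (lo1, hi1) in f1.items():
        if attr not in f2:
            return False  # f2 leaves attr unconstrained, so it can match events f1 rejects
        lo2, hi2 = f2[attr]
        if lo2 < lo1 or hi2 > hi1:
            return False  # f2 admits values outside f1's interval on attr
    return True

# Example: the broader price filter covers the narrower one that adds an extra constraint.
print(covers({"price": (0, 100)}, {"price": (10, 50), "qty": (1, 5)}))  # True
```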
Abstract:
The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm. In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while having a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and we experimentally show that segmenting sequences using this segmentation model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and we show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier detection algorithms in sequences. In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
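For readers unfamiliar with the standard dynamic programming algorithm referred to above, the following is a compact sketch, not the thesis code: one-dimensional data, squared-error cost, each segment represented by its mean, with variable names chosen here for illustration.

```python
import numpy as np

def optimal_segmentation_error(x, k):
    """Minimum total squared error of partitioning sequence x into k contiguous segments,
    each represented by its mean. Runs in O(n^2 * k) time using prefix sums."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s1 = np.concatenate(([0.0], np.cumsum(x)))       # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(x ** 2)))  # prefix sums of squares

    def seg_cost(i, j):
        # Squared error of x[i:j] around its mean, computed from prefix sums.
        total, total_sq = s1[j] - s1[i], s2[j] - s2[i]
        return total_sq - total * total / (j - i)

    dp = np.full((k + 1, n + 1), np.inf)
    dp[0, 0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            dp[seg, j] = min(dp[seg - 1, i] + seg_cost(i, j) for i in range(seg - 1, j))
    return dp[k, n]

# Example: two clearly separated levels incur essentially zero error with k = 2.
print(optimal_segmentation_error([1, 1, 1, 9, 9, 9], k=2))  # ~0.0
```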
Abstract:
A non-linear model, construed as a generalized version of the models put forth earlier for the study of bi-state social interaction processes, is proposed in this study. The feasibility of deriving the dynamics of such processes is demonstrated by establishing equivalence between the non-linear model and a higher order linear model.
Abstract:
The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows the subscribers to express their interests very accurately. In order to implement a content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network. A content-based network is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes with a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by reducing the workload of content-based routers, offloading some of their content routing cost to clients. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
Abstract:
Background: The objective is to estimate the incremental cost-effectiveness of the Australian National Hand Hygiene Initiative implemented between 2009 and 2012, using healthcare-associated Staphylococcus aureus bacteraemia as the outcome. Baseline comparators are the eight existing state and territory hand hygiene programmes. The setting is the Australian public healthcare system, and 1,294,656 admissions from the 50 largest Australian hospitals are included. Methods: The design is a cost-effectiveness modelling study using a before-and-after quasi-experimental design. The primary outcome is cost per life year saved from reduced cases of healthcare-associated Staphylococcus aureus bacteraemia, with cost estimated as the annual on-going maintenance costs less the costs saved from fewer infections. Data were harvested from existing sources or were collected prospectively, and the time horizon for the model was 12 months, 2011–2012. Findings: No usable pre-implementation Staphylococcus aureus bacteraemia data were made available from the 11 study hospitals in Victoria or the single hospital in the Northern Territory, leaving 38 hospitals across six states and territories available for cost-effectiveness analyses. Total annual costs increased by $2,851,475 for a return of 96 years of life, giving an incremental cost-effectiveness ratio (ICER) of $29,700 per life year gained. Probabilistic sensitivity analysis revealed a 100% chance the initiative was cost-effective in the Australian Capital Territory and Queensland, with ICERs of $1,030 and $8,988 respectively. There was an 81% chance it was cost-effective in New South Wales with an ICER of $33,353, a 26% chance for South Australia with an ICER of $64,729, and a 1% chance for Tasmania and Western Australia. The 12 hospitals in Victoria and the Northern Territory incur annual on-going maintenance costs of $1.51M; no information was available to describe cost savings or health benefits. Conclusions: The Australian National Hand Hygiene Initiative was cost-effective against an Australian threshold of $42,000 per life year gained. The return on investment varied among the states and territories of Australia.
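The headline figure follows directly from the definition of an ICER (incremental cost divided by incremental health gain), as this small check using the numbers quoted above shows:

```python
incremental_cost = 2_851_475          # additional annual cost, AUD
life_years_gained = 96                # life years returned by the initiative
icer = incremental_cost / life_years_gained
print(round(icer))                    # 29703, reported above as ~$29,700 per life year gained
```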
Abstract:
Low-level strategic supplements constitute one of the few options for northern beef producers to increase breeder productivity and profitability. The objectives of the project were to improve the cost-effectiveness of using such supplements and to improve supplement delivery systems. Urea-based supplements fed during the dry season can substantially reduce breeder liveweight loss and increase fertility during severe dry seasons. Also, when fed during the late wet season, these supplements increased breeder body liveweight and increased the fertility of breeders in low body condition. Intake of dry lick supplements fed free choice is apparently determined primarily by the palatability of supplements relative to pasture, and training of cattle appears to be of limited importance. Siting of supplementation points has some effect on supplement intake, but little effect on grazing behaviour. Economic analysis of supplementation (urea, phosphorus or molasses) and weaning strategies was based on the relative efficacy of these strategies to maintain breeder body condition late in the dry season. Adequate body condition of breeders at this time of the year is needed to avoid mortality from under-nutrition and to achieve satisfactory fertility of breeders during the following wet season. Supplements were highly cost-effective when they reduced mortality, but economic returns were generally low if the only benefit was increased fertility.
Abstract:
Accounting information systems (AIS) capture and process accounting data and provide valuable information for decision-makers. However, in a rapidly changing environment, continual management of the AIS is necessary for organizations to optimise performance outcomes. We suggest that building a dynamic AIS capability enhances accounting process and organizational performance. Using the dynamic capabilities framework (Teece 2007), we propose that a dynamic AIS capability can be developed through the synergy of three competencies: a flexible AIS, a complementary business intelligence system, and accounting professionals with IT technical competency. Using survey data, we find evidence of a positive association between a dynamic AIS capability, accounting process performance, and overall firm performance. The results suggest that developing a dynamic AIS resource can add value to an organization. This study provides guidance for organizations looking to leverage the performance outcomes of their AIS environment.
Abstract:
The INFORMAS food prices module proposes a step-wise framework to measure the cost and affordability of population diets. The price differential and the tax component of healthy and less healthy foods, food groups, meals and diets will be benchmarked and monitored over time. Results can be used to model or assess the impact of fiscal policies, such as ‘fat taxes’ or subsidies. Key methodological challenges include: defining healthy and less healthy foods, meals, diets and commonly consumed items; including costs of alcohol, takeaways, convenience foods and time; selecting the price metric; sampling frameworks; and standardizing collection and analysis protocols. The minimal approach uses three complementary methods to measure the price differential between pairs of healthy and less healthy foods. Specific challenges include choosing policy relevant pairs and defining an anchor for the lists. The expanded approach measures the cost of a healthy diet compared to the current (less healthy) diet for a reference household. It requires dietary principles to guide the development of the healthy diet pricing instrument and sufficient information about the population’s current intake to inform the current (less healthy) diet tool. The optimal approach includes measures of affordability and requires a standardised measure of household income that can be used for different countries. The feasibility of implementing the protocol in different countries is being tested in New Zealand, Australia and Fiji. The impact of different decision points to address challenges will be investigated in a systematic manner. We will present early insights and results from this work.