Abstract:
We investigate the Student-t process as an alternative to the Gaussian process as a non-parametric prior over functions. We derive closed form expressions for the marginal likelihood and predictive distribution of a Student-t process, by integrating away an inverse Wishart process prior over the covariance kernel of a Gaussian process model. We show surprising equivalences between different hierarchical Gaussian process models leading to Student-t processes, and derive a new sampling scheme for the inverse Wishart process, which helps elucidate these equivalences. Overall, we show that a Student-t process can retain the attractive properties of a Gaussian process - a nonparametric representation, analytic marginal and predictive distributions, and easy model selection through covariance kernels - but has enhanced flexibility, and predictive covariances that, unlike a Gaussian process, explicitly depend on the values of training observations. We verify empirically that a Student-t process is especially useful in situations where there are changes in covariance structure, or in applications such as Bayesian optimization, where accurate predictive covariances are critical for good performance. These advantages come at no additional computational cost over Gaussian processes.
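For orientation, the marginal distribution underlying such a process is the multivariate Student-t; one common parameterization (not necessarily the exact convention of this paper) for n training outputs y with mean μ, kernel matrix K and degrees of freedom ν is:

```latex
p(\mathbf{y}) =
  \frac{\Gamma\!\left(\tfrac{\nu+n}{2}\right)}
       {\Gamma\!\left(\tfrac{\nu}{2}\right)\,(\nu\pi)^{n/2}\,\lvert K\rvert^{1/2}}
  \left(1 + \frac{(\mathbf{y}-\boldsymbol{\mu})^{\top} K^{-1}(\mathbf{y}-\boldsymbol{\mu})}{\nu}\right)^{-\frac{\nu+n}{2}}
```

As ν → ∞ this recovers the Gaussian marginal; finite ν gives heavier tails, and the quadratic form (y − μ)ᵀK⁻¹(y − μ) is what makes the predictive covariance depend on the observed training values.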
Abstract:
Recent advances in processor speeds, mobile communications and battery life have enabled computers to evolve from completely wired to completely mobile. In the most extreme case, all nodes are mobile and communication takes place at available opportunities – using both traditional communication infrastructure as well as the mobility of intermediate nodes. These are mobile opportunistic networks. Data communication in such networks is a difficult problem, because of the dynamic underlying topology, the scarcity of network resources and the lack of global information. Establishing end-to-end routes in such networks is usually not feasible. Instead, a store-and-carry forwarding paradigm is better suited for such networks. This dissertation describes and analyzes algorithms for forwarding messages in such networks. In order to design effective forwarding algorithms for mobile opportunistic networks, we start by first building an understanding of the set of all paths between nodes, which represent the available opportunities for any forwarding algorithm. Relying on real measurements, we enumerate paths between nodes and uncover what we refer to as the path explosion effect. The term path explosion refers to the fact that the number of paths between a randomly selected pair of nodes increases exponentially with time. We draw from the theory of epidemics to model and explain the path explosion effect. This is the first contribution of the thesis, and is a key observation that underlies subsequent results. Our second contribution is the study of forwarding algorithms. For this, we rely on trace driven simulations of different algorithms that span a range of design dimensions. We compare the performance (success rate and average delay) of these algorithms. We make the surprising observation that most algorithms we consider have roughly similar performance. We explain this result in light of the path explosion phenomenon.
While the performance of most algorithms we studied was roughly the same, these algorithms differed in terms of cost. This prompted us to focus on designing algorithms with the explicit intent of reducing costs. For this, we cast the problem of forwarding as an optimal stopping problem. Our third main contribution is the design of strategies based on optimal stopping principles, which we refer to as delegation schemes. Our analysis shows that using a delegation scheme reduces cost over naive forwarding by a factor of O(√N), where N is the number of nodes in the network. We further validate this result on real traces, where the cost reduction observed is even greater. Our results so far rest on a key assumption: unbounded buffers on nodes. Next, we relax this assumption, so that the problem shifts to one of prioritization of messages for transmission and dropping. Our fourth contribution is the study of message prioritization schemes, combined with forwarding. Our main result is that one achieves higher performance by assigning higher priorities to young messages in the network. We again interpret this result in light of the path explosion effect.
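The delegation idea can be illustrated with a toy simulation. This is a hypothetical random-meeting model, not the dissertation's trace-driven evaluation: each node is given a random quality metric, and a copy of the message replicates only to nodes whose quality exceeds the best quality any copy has seen so far, which keeps the number of copies well below the network size.

```python
import random

def delegation_copies(n_nodes, n_meetings, seed=0):
    """Simplified delegation forwarding on random pairwise meetings.
    A copy holder replicates to a peer whose quality exceeds the copy's
    current threshold; both then adopt the higher threshold."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_nodes)]
    threshold = {0: quality[0]}  # node 0 is the message source
    for _ in range(n_meetings):
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if a == b:
            continue
        for holder, peer in ((a, b), (b, a)):
            if holder in threshold and quality[peer] > threshold[holder]:
                new_t = quality[peer]
                threshold[holder] = new_t
                # peer receives a copy (or raises its threshold)
                threshold[peer] = max(threshold.get(peer, 0.0), new_t)
    return len(threshold)  # number of nodes that ever held a copy

copies = delegation_copies(n_nodes=1000, n_meetings=200000)
```

Under this model the copy count grows on the order of √N rather than N, mirroring the O(√N) cost reduction result quoted above.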
Abstract:
Fibre-Reinforced Plastics (FRPs) have been used in civil aerospace vehicles for decades. The current state-of-the-art in airframe design and manufacture results in approximately half the airframe mass attributable to FRP materials. The continual increase in the use of FRP materials over metallic alloys is attributable to the material's superior specific strength and stiffness, fatigue performance and corrosion resistance. However, the full potential of these materials has yet to be exploited, as analysis methods to predict physical failure with equal accuracy and robustness are not yet available. The result is a conservative approach to design, but one that can bring benefit via increased inspection intervals and reduced cost over the vehicle life. The challenge is that the methods used in practice are based on empirical tests; real relationships and drivers are difficult to discern in this complex process, so the trade-off decision is challenging and uncertain. The aim of this feasibility study was to scope a viable process which could help develop rules and relationships based on the fundamental mechanics of composite materials and the economics of production and operation, which would enhance understanding of the role and impact of design allowables across the life of a composite structure.
Abstract:
As the emphasis on initiatives that can improve environmental efficiency while simultaneously maintaining economic viability has escalated in recent years, attention has turned to more radical concepts of operation. In particular, the cruiser–feeder concept has shown potential for a new generation, environmentally friendly, air-transport system to alleviate the growing pressure on the passenger air-transportation network. However, a full evaluation of realizable benefits is needed to determine how the design and operation of potential feeder-aircraft configurations affect the feasibility of the overall concept. This paper presents an analysis of a cruiser–feeder concept, in which fuel is transferred between the feeder and the cruiser in an aerial-refueling configuration to extend range while reducing cruiser weight, compared against the effects of escalating existing technology levels while retaining current passenger levels. Up to 14% fuel-burn and 12% operating-cost savings can be achieved when compared to a similar technology-level aircraft concept without aerial refueling, representing up to 26% in fuel burn and 25% in total operating cost over the existing operational model at today’s standard fleet technology and performance. However, these potential savings are not uniformly distributed across the network, and the system is highly sensitive to the routes serviced, with reductions in revenue-generation potential observed across the network for aerial-refueling operations due to reductions in passenger revenue.
Abstract:
Traditional inventory models focus on risk-neutral decision makers, i.e., characterizing replenishment strategies that maximize expected total profit, or equivalently, minimize expected total cost over a planning horizon. In this paper, we propose a framework for incorporating risk aversion in multi-period inventory models as well as multi-period models that coordinate inventory and pricing strategies. In each case, we characterize the optimal policy for various measures of risk that have been commonly used in the finance literature. In particular, we show that the structure of the optimal policy for a decision maker with exponential utility functions is almost identical to the structure of the optimal risk-neutral inventory (and pricing) policies. Computational results demonstrate the importance of this approach not only to risk-averse decision makers, but also to risk-neutral decision makers with limited information on the demand distribution.
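As a reference point for the risk-neutral objective described above, the classical single-period (newsvendor) case, a special case of expected-cost minimization, has a closed form: order up to the critical fractile cu/(cu + co) of the demand distribution. The numbers below are hypothetical and this is the textbook special case, not the paper's multi-period model.

```python
def newsvendor_uniform(price, cost, salvage, lo, hi):
    """Risk-neutral expected-profit-maximizing order quantity for demand
    uniform on [lo, hi]: order to the critical fractile cu / (cu + co)."""
    cu = price - cost      # underage cost: margin lost per unit of unmet demand
    co = cost - salvage    # overage cost: loss per leftover unit
    fractile = cu / (cu + co)
    return lo + (hi - lo) * fractile

# cu = 6, co = 3, fractile = 2/3, demand uniform on [100, 400] -> order 300
q = newsvendor_uniform(price=10.0, cost=4.0, salvage=1.0, lo=100.0, hi=400.0)
```

The paper's structural result says that, under exponential utility, the optimal multi-period policy keeps essentially this base-stock form, with only the target level shifted by the risk-aversion parameter.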
Abstract:
Background A whole-genome genotyping array has previously been developed for Malus using SNP data from 28 Malus genotypes. This array offers the prospect of high throughput genotyping and linkage map development for any given Malus progeny. To test the applicability of the array for mapping in diverse Malus genotypes, we applied the array to the construction of a SNP-based linkage map of an apple rootstock progeny. Results Of the 7,867 Malus SNP markers on the array, 1,823 (23.2 %) were heterozygous in one of the two parents of the progeny, 1,007 (12.8 %) were heterozygous in both parental genotypes, whilst just 2.8 % of the 921 Pyrus SNPs were heterozygous. A linkage map spanning 1,282.2 cM was produced comprising 2,272 SNP markers, 306 SSR markers and the S-locus. The length of the M432 linkage map was increased by 52.7 cM with the addition of the SNP markers, whilst marker density increased from 3.8 cM/marker to 0.5 cM/marker. Just three regions in excess of 10 cM remain where no markers were mapped. We compared the positions of the mapped SNP markers on the M432 map with their predicted positions on the ‘Golden Delicious’ genome sequence. A total of 311 markers (13.7 % of all mapped markers) mapped to positions that conflicted with their predicted positions on the ‘Golden Delicious’ pseudo-chromosomes, indicating the presence of paralogous genomic regions or misassignments of genome sequence contigs during the assembly and anchoring of the genome sequence. Conclusions We incorporated data for the 2,272 SNP markers onto the map of the M432 progeny and have presented the most complete and saturated map of the full 17 linkage groups of M. pumila to date. The data were generated rapidly in a high-throughput semi-automated pipeline, permitting significant savings in time and cost over linkage map construction using microsatellites.
The application of the array will permit linkage maps to be developed for QTL analyses in a cost-effective manner, and the identification of SNPs that have been assigned erroneous positions on the ‘Golden Delicious’ reference sequence will assist in the continued improvement of the genome sequence assembly for that variety.
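The reported 0.5 cM/marker density can be re-derived from the figures quoted in the abstract (2,272 SNPs, 306 SSRs and the S-locus over 1,282.2 cM):

```python
# Figures taken from the abstract above.
n_markers = 2272 + 306 + 1        # SNPs + SSRs + the S-locus
map_length_cm = 1282.2
density = map_length_cm / n_markers   # average spacing, cM per marker (~0.50)
```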
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
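The 'incremental time-stepping' idea above can be sketched schematically. This is a toy, not the Met Office implementation; the function names and the single-term split into a constant 'thick' part and a cloud-dependent 'thin' part are hypothetical.

```python
def run_incremental(n_steps, full_every, full_calc, thin_calc):
    """Full spectral calculation every `full_every` steps; in between,
    only the cheap optically thin terms are recomputed and their change
    is added to the last full heating rate."""
    rates = []
    for step in range(n_steps):
        if step % full_every == 0:
            h_full = full_calc(step)    # expensive: all k-terms
            thin_ref = thin_calc(step)  # thin terms at the full step
            h = h_full
        else:
            h = h_full + (thin_calc(step) - thin_ref)  # cheap increment
        rates.append(h)
    return rates

# Hypothetical 1-term example: the thin term tracks the cloud field,
# the thick part is constant, so the increments are exact here.
rates = run_incremental(6, 3,
                        full_calc=lambda s: 10.0 + 0.5 * s,
                        thin_calc=lambda s: 0.5 * s)
```

In a real model the thick terms also drift slowly between full calls, so the increment is an approximation whose error is reset at each full calculation.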
Abstract:
Postgraduate program in Agronomy (Plant Production) - FCAV
Abstract:
Maine implemented a hospital rate-setting program in 1984, at approximately the same time as Medicare started the Prospective Payment System (PPS). This study examines the effectiveness of the program in controlling cost over the period 1984-1989. Hospital costs in Maine are compared to costs in 36 non-rate-setting states and 11 other rate-setting states. Changes in cost per equivalent admission, adjusted patient day, per capita, admissions, and length of stay are described and analyzed using multivariate techniques. A number of supply and demand variables which were expected to influence costs independently of rate-setting were controlled for in the study. Results indicate the program was effective in containing costs measured in terms of cost per adjusted patient day. However, this was not true for the other two cost variables. The average length of stay increased during the period in Maine hospitals, indicating an association with rate-setting. Several supply variables, especially the number of beds per 1,000 population, were strongly associated with the cost and use of hospitals.
Abstract:
Aircraft Operators Companies (AOCs) are always willing to keep the cost of a flight as low as possible. These costs can be modelled as a function of the fuel consumption, time of flight and fixed costs (overflight cost, maintenance, etc.). These are strongly dependent on the atmospheric conditions, the presence of winds and the aircraft performance. For this reason, much research effort is being put into the development of numerical and graphical techniques for defining the optimal trajectory. This paper presents a different approach to accommodate AOCs' preferences, adding value to their activities, through the development of a tool called the aircraft trajectory simulator. This tool is able to simulate the actual flight of an aircraft with the constraints imposed. The simulator is based on a point mass model of the aircraft. The aim of this paper is to evaluate 3DoF aircraft model errors with BADA data against real data from the Flight Data Recorder (FDR). Therefore, to validate the proposed simulation tool, a comparative analysis of the state variable vector is made between an actual flight and the same flight using the simulator. Finally, an example of a cruise phase is presented, where a conventional levelled flight is compared with a continuous climb flight. The comparison results show the potential benefits of following user-preferred routes for commercial flights.
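A minimal sketch of such a longitudinal point-mass (3DoF) model can be written as a forward-Euler integration. The numbers in the example are illustrative, not BADA coefficients, and real simulators use thrust, drag and fuel-flow models that vary with speed and altitude.

```python
import math

def simulate_climb(m, thrust, drag, v0, gamma, dt, n_steps, g=9.81):
    """Forward-Euler integration of the longitudinal point-mass equations
    at a fixed flight-path angle gamma:
        dV/dt = (T - D)/m - g*sin(gamma)
        dh/dt = V*sin(gamma),   dx/dt = V*cos(gamma)"""
    v, h, x = v0, 0.0, 0.0
    for _ in range(n_steps):
        v += dt * ((thrust - drag) / m - g * math.sin(gamma))
        h += dt * v * math.sin(gamma)
        x += dt * v * math.cos(gamma)
    return v, h, x

# Sanity check: level flight (gamma = 0) with thrust balancing drag
# keeps speed and altitude constant while ground distance accumulates.
v, h, x = simulate_climb(m=60000.0, thrust=120000.0, drag=120000.0,
                         v0=200.0, gamma=0.0, dt=1.0, n_steps=10)
```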
Abstract:
Epoxidized soybean oil (ESO) is a chemical that has long been used as a co-stabilizer and secondary plasticizer for poly(vinyl chloride) (PVC), that is, as a material whose maximum amount in the PVC compound is limited. Its application as a primary plasticizer, i.e., as the main plasticizing element in the PVC compound, and as a base for other plasticizers from renewable sources, has grown in recent years, mainly due to performance improvements and the reduced cost of ESO compared with traditional plasticizers. The epoxidation reaction of soybean oil is well known and takes place in two liquid phases, with reactions in both phases and mass transfer between the phases. The most widely used industrial process relies on in-situ formation of performic acid, through the gradual addition of the main reagent, hydrogen peroxide, to a stirred mixture of formic acid and refined soybean oil. Industrially, the process is run in batch mode, controlling the addition of the hydrogen peroxide reagent so that heat generation does not exceed the cooling capacity of the system. The process cycle can range from 8 to 12 hours to reach the desired conversion, making production capacity dependent on relatively heavy investments in mechanically stirred reactors, which present several safety risks. Previous studies do not explore in depth some potential areas for optimization and for reducing process limitations, such as heat-transfer intensification, which allows the total reaction time to be shortened. This work experimentally evaluates, and proposes a model for, the epoxidation reaction of soybean oil under conditions of maximum heat removal, which allows all reagents to be added at the start of the reaction, simplifying the process. A model was fitted to the experimental data.
The heat-transfer coefficient, whose theoretical estimation can incur significant errors, was calculated from empirical data and included in the model, adding an important variability factor relative to previous models. The study proposes a theoretical basis for potential alternatives to the currently adopted processes, seeking to understand the conditions that are necessary and feasible at industrial scale for shortening the reaction cycle, and it may also support future studies on implementing a continuous reactor, more efficient and safer, for this process.
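The batch constraint described above (hydrogen peroxide added no faster than the cooling system can remove the reaction heat) can be sketched as a simple energy balance. All numbers here are hypothetical illustrations, not values from this work.

```python
def max_feed_rate(ua, delta_t, dh_rxn):
    """Maximum reagent feed rate (mol/s) such that the heat released by a
    feed-rate-limited exothermic reaction does not exceed the reactor's
    cooling capacity Q = U*A*dT."""
    q_cooling = ua * delta_t   # removable heat, W
    return q_cooling / dh_rxn  # mol/s of reagent that can react safely

# Hypothetical values: UA = 5 kW/K, 40 K driving force, 230 kJ/mol exotherm
f = max_feed_rate(ua=5000.0, delta_t=40.0, dh_rxn=230e3)
```

Intensifying heat transfer (raising U·A) raises this bound, which is why maximum heat removal permits charging all reagents up front and shortening the cycle.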
Abstract:
The Florida Everglades is a highly diverse socionatural landscape that historically spanned much of the south Florida peninsula. Today, the Florida Everglades is an iconic but highly contested conservation landscape. It is the site of one of the world's largest publicly funded ecological restoration programs, estimated to cost over $8 billion (U.S. GAO 2007), and it is home to over two million acres of federally protected lands, including the Big Cypress National Preserve and Everglades National Park. However, local people's values, practices and histories overlap and often conflict with the global and eco-centric values linked to Everglades environmental conservation efforts, sparking environmental conflict. My dissertation research examined the cultural politics of nature associated with two Everglades conservation and ecological restoration projects: 1) the creation and stewardship of the Big Cypress National Preserve, and 2) the Tamiami Trail project at the northern boundary of Everglades National Park. Using multiple research methods including ethnographic fieldwork, archival research, participant observation, surveys and semi-structured interviews, I documented how these two projects have shaped environmental claims-making strategies to Everglades nature on the part of environmental NGOs, the National Park Service and local white outdoorsmen. In particular, I examined the emergence of an oppositional white identity called the Gladesmen Culture. My findings include the following: 1) just as different forms of nature are historically produced, contingent and power-laden, so too are different claims to Everglades nature; 2) identity politics are an integral dimension of Everglades environmental conflicts; and 3) the Big Cypress region's history and contemporary conflicts are shaped by the broader political economy of development in south Florida. 
My dissertation concluded that identity politics, class and property relations have played a key, although not always obvious, role in shaping Everglades history and environmental claims-making, and that they continue to influence contemporary Everglades environmental conflicts.
Abstract:
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims at replicating this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are: 56% in sound source localisation computational cost over an audio-only system, 8% in speaker diarisation error rate over an audio-only speaker recognition unit, and 36% on the precision–recall metric over an audio–video dominant speaker recognition method.
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date involving influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem that has attracted a significant amount of interest from the computer science literature. Given a social network, find a target set of customers to seed with a product. Then, a cascade will be caused by these initial adopters and other people start to adopt this product due to the influence they receive from earlier adopters. The idea is to find the minimum cost that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) Problem. In the WTSS problem, the diffusion can take place over as many time periods as needed and a free product is given out to the individuals in the target set. Restricting the number of time periods that the diffusion takes place over to be one, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that we restrict the number of time periods that the diffusion takes place over to be one. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights we obtain from special graphs, we develop efficient methods for general graphs. On trees, first, we propose a polynomial time algorithm.
More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
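The target set selection objective can be made concrete with a brute-force toy. This is an illustration of the WTSS-style objective under the standard threshold diffusion model on a tiny hypothetical graph, not the dissertation's branch-and-cut method, which is what makes large instances tractable.

```python
from itertools import chain, combinations

def activates(graph, thresholds, seeds):
    """Threshold diffusion: a node activates once the number of its active
    neighbours reaches its threshold; repeat until a fixed point."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v not in active and \
               sum(1 for u in graph[v] if u in active) >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def min_target_set(graph, thresholds, costs):
    """Brute-force: cheapest seed set that activates every node.
    Exponential in |V| -- only viable for tiny instances."""
    nodes = list(graph)
    best, best_cost = None, float("inf")
    for seeds in chain.from_iterable(
            combinations(nodes, k) for k in range(len(nodes) + 1)):
        if activates(graph, thresholds, seeds) == set(nodes):
            c = sum(costs[v] for v in seeds)
            if c < best_cost:
                best, best_cost = set(seeds), c
    return best, best_cost

# Path a-b-c, every node needs one active neighbour: seeding either
# endpoint (or the middle) activates the whole graph at cost 1.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
thr = {"a": 1, "b": 1, "c": 1}
cost = {"a": 1, "b": 1, "c": 1}
seeds, total = min_target_set(graph, thr, cost)
```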
Abstract:
The attention on green building is driven by the desire to reduce a building’s running cost over its entire life cycle. Moreover, with the use of sustainable technologies and more environmentally friendly products in the building sector, the construction industry contributes significantly to the sustainable actions of our society. Different certification systems have entered the market with the aim of measuring a building’s sustainability. However, each system uses its own set of criteria for the purpose of rating. The primary goal of this study is to identify a comprehensive set of criteria for the measurement of building sustainability, and thereby to facilitate the comparison of existing rating methods. The collection and analysis of the criteria, identified through a comprehensive literature review, have led to the establishment of two additional categories besides the three pillars of sustainability. The comparative analyses presented in this thesis reveal strengths and weaknesses of the chosen green building certification systems - LEED, BREEAM, and DGNB.