5 results for allocation of fixed cost with normal capacity

in DRUM (Digital Repository at the University of Maryland)


Relevance:

100.00%

Abstract:

A decision-maker, when faced with a limited and fixed budget to collect data in support of a multiple attribute selection decision, must decide how many samples to observe from each alternative and attribute. This allocation decision is of particular importance when the information gained leads to uncertain estimates of the attribute values, as with sample data collected from observations such as measurements, experimental evaluations, or simulation runs. For example, when the U.S. Department of Homeland Security must decide upon a radiation detection system to acquire, a number of performance attributes are of interest and must be measured in order to characterize each of the considered systems. We identified and evaluated several approaches to incorporate the uncertainty in the attribute value estimates into a normative model for a multiple attribute selection decision. Assuming an additive multiple attribute value model, we demonstrated the idea of propagating the attribute value uncertainty and describing the decision values for each alternative as probability distributions. These distributions were used to select an alternative. With the goal of maximizing the probability of correct selection, we developed and evaluated, under several different sets of assumptions, procedures to allocate the fixed experimental budget across the multiple attributes and alternatives. Through a series of simulation studies, we compared the performance of these allocation procedures to the simple, but common, procedure that distributed the sample budget equally across the alternatives and attributes. We found that the allocation procedures developed to incorporate decision-maker knowledge, such as knowledge of the decision model, outperformed those that neglected such information. Beginning with general knowledge of the attribute values provided by Bayesian prior distributions, and updating this knowledge with each observed sample, the sequential allocation procedure performed particularly well. These observations demonstrate that managing projects focused on a selection decision so that decision modeling and experimental planning are done jointly, rather than in isolation, can improve the overall selection results.
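
The abstract does not spell out the allocation procedures themselves, but the setup can be illustrated with a small simulation. The sketch below is a generic illustration rather than the dissertation's method: it scores each alternative with an additive value model, propagates sampling noise into the decision values, and compares an equal-allocation plan against a greedy sequential plan that spends each remaining sample where it most reduces the variance of the weighted value estimate. The weights, noise level, and budget are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n_alt, n_attr, budget, noise_sd = 3, 3, 90, 0.3
    w = np.array([0.7, 0.2, 0.1])               # additive value-model weights

    def run(policy):
        true = rng.uniform(0, 1, (n_alt, n_attr))  # unknown attribute values
        counts = np.ones((n_alt, n_attr))          # one seed sample per cell
        sums = rng.normal(true, noise_sd)
        for _ in range(budget - n_alt * n_attr):
            if policy == "equal":                  # round-robin over cells
                i, j = np.unravel_index(counts.argmin(), counts.shape)
            else:  # greedy: biggest drop in Var of the weighted value estimate
                gain = w**2 * noise_sd**2 / (counts * (counts + 1))
                i, j = np.unravel_index(gain.argmax(), gain.shape)
            sums[i, j] += rng.normal(true[i, j], noise_sd)
            counts[i, j] += 1
        chosen = ((sums / counts) @ w).argmax()    # highest estimated value
        return chosen == (true @ w).argmax()       # correct selection?

    for policy in ("equal", "greedy"):
        pcs = np.mean([run(policy) for _ in range(500)])
        print(f"{policy:6s} P(correct selection) = {pcs:.2f}")

Because the greedy rule knows the value-model weights, it concentrates samples on the high-weight attribute and typically selects the correct alternative more often than the equal split, echoing the abstract's finding that decision-model knowledge improves allocation.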

Relevance:

100.00%

Abstract:

In many major cities, fixed-route transit systems such as bus and rail serve millions of trips per day. These systems have riders gather at common locations (stations or stops) and board at common times (for example, according to a predetermined schedule or headway). By using common service locations and times, these modes can consolidate many trips that have similar origins and destinations or overlapping routes. However, fixed routes are not sensitive to changing travel patterns and offer no way of identifying which trips go unserved, or are poorly served, by the existing network. On the opposite end of the spectrum, personal modes of transportation, such as a private vehicle or taxi, offer service to and from the exact origin and destination of a rider, at close to exactly the time they desire to travel. Despite the apparent increase in convenience to users, the presence of a large number of small vehicles results in a disorganized, and potentially congested, road network during high-demand periods. The focus of the research presented in this paper is to develop a system that combines the on-demand nature of a personal mode with the efficiency of shared modes. In this system, users submit their requests for travel but are asked to make small compromises in order to accommodate other passengers: they walk to a nearby meeting point rather than boarding at their exact origin and destination, and they slightly modify their time of travel. Because the origin and destination locations of a request can be adjusted, this is a more general case of the Dial-a-Ride problem with time windows. The solution methodology uses a graph clustering algorithm coupled with a greedy insertion technique. A case study is presented using actual requests for taxi trips in Washington, DC, and shows a significant decrease in the number of vehicles required to serve the demand.
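
As a rough illustration of the greedy-insertion stage, the sketch below splices each travel request into whichever existing vehicle route can absorb it most cheaply, opening a new vehicle when no feasible splice exists. The graph-clustering stage that merges nearby origins and destinations into shared meeting points, and the time-window checks, are omitted for brevity; the coordinates and detour bound are hypothetical.

    import math
    from dataclasses import dataclass

    @dataclass
    class Request:
        origin: tuple            # (x, y) pickup meeting point
        dest: tuple              # (x, y) drop-off meeting point

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def route_length(stops):
        return sum(dist(stops[k], stops[k + 1]) for k in range(len(stops) - 1))

    def best_insertion(route, req):
        """Cheapest way to splice (origin, dest) into a stop sequence,
        keeping the origin before the destination."""
        best = None
        for i in range(len(route) + 1):
            for j in range(i + 1, len(route) + 2):
                cand = route[:i] + [req.origin] + route[i:]
                cand = cand[:j] + [req.dest] + cand[j:]
                added = route_length(cand) - route_length(route)
                if best is None or added < best[0]:
                    best = (added, cand)
        return best

    def greedy_assign(requests, max_detour=4.0):
        vehicles = []                        # each vehicle is a list of stops
        for req in requests:
            options = [(best_insertion(r, req), r) for r in vehicles]
            options = [(cost, cand, r) for (cost, cand), r in options
                       if cost <= max_detour]
            if options:
                cost, cand, r = min(options, key=lambda t: t[0])
                r[:] = cand                  # accept the cheapest splice
            else:
                vehicles.append([req.origin, req.dest])  # open a new vehicle
        return vehicles

    reqs = [Request((0, 0), (5, 5)), Request((1, 0), (5, 6)),
            Request((9, 9), (0, 1))]
    print(len(greedy_assign(reqs)), "vehicles needed")

Here the first two requests, whose origins and destinations nearly coincide, share one vehicle while the third opens its own; the same consolidation effect drives the vehicle-count reduction reported in the case study.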

Relevance:

100.00%

Abstract:

Social anhedonia is a deficiency in the capacity to experience pleasure from social interactions. This study examined the implications of social anhedonia for romantic relationship functioning, including its association with sentiments toward romantic partners that are central to relationship functioning (satisfaction, commitment, regard, and care), analogous perceptions of the partner's sentiments, hostile behavior during relationship conflict, and perception of the partner's hostile behavior. Data were collected from 281 participants who were involved in romantic relationships. Support was found for social anhedonia's hypothesized negative association with satisfaction, regard, and care, as well as with all four perceived partner sentiments. These associations were independent of attachment anxiety and avoidance. Additionally, attachment avoidance mediated social anhedonia's relationship with commitment. However, no support was found for social anhedonia's hypothesized positive association with actual and perceived partner hostile behavior. Results suggest that social anhedonia may undermine the functioning of interpersonal relationships.
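
For readers unfamiliar with the mediation claim, a product-of-coefficients test with a bootstrap confidence interval is one standard way such a result is established. The sketch below uses simulated data and plain OLS regressions; the study's actual measures, covariates, and estimator are not given in the abstract and may well differ.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 281                                  # sample size from the abstract
    anhedonia = rng.normal(size=n)           # X: social anhedonia (simulated)
    avoidance = 0.5 * anhedonia + rng.normal(size=n)    # M: attachment avoidance
    commitment = -0.6 * avoidance + rng.normal(size=n)  # Y: commitment

    def ab_paths(x, m, y):
        """a-path (X -> M) times b-path (M -> Y, controlling for X)."""
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
        return a * b

    indirect = ab_paths(anhedonia, avoidance, commitment)
    boot = []
    for _ in range(2000):                    # percentile bootstrap for the CI
        idx = rng.integers(0, n, n)
        boot.append(ab_paths(anhedonia[idx], avoidance[idx], commitment[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

A confidence interval that excludes zero is the usual evidence for mediation; with the simulated coefficients above, the interval sits below zero, mirroring the direction of the reported effect.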

Relevance:

100.00%

Abstract:

The performance, energy-efficiency, and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects in size, speed, and power. Continued Moore's Law scaling will not come from technology scaling alone; it must also involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by moving the routing problem into the third dimension, and it enables transistor density scaling independent of the technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated, high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory wall and the communication wall.

However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density, and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs.

A new paradigm is being put forth that integrates the computational, electrical, physical, thermal, and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the Co-Design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters, and the multitude of metrics of interest to the designer (i.e., power, performance, temperature, and reliability). In this dissertation, we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm and discuss possible avenues for future improvement of this work.
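
The heat-removal limitation can be made concrete with a toy one-dimensional thermal-resistance chain; this is not the dissertation's simulation framework, and every resistance and power value below is a hypothetical placeholder. With a single heat sink on top, heat from each die must cross all the bonding interfaces above it, so temperature compounds toward the bottom of the stack, whereas interlayer micro-fluidic cooling gives each die its own path to the coolant.

    T_AMB = 45.0       # ambient inside the case, degC (placeholder)
    R_SINK = 0.4       # heat sink + spreader to ambient, K/W (placeholder)
    R_LAYER = 0.15     # die-to-die bonding interface, K/W (placeholder)

    def air_cooled_temps(powers):
        """One heat sink on top of the stack (index 0 = bottom die). Heat
        from a die crosses every interface above it, so lower dies run hotter."""
        t_base = T_AMB + R_SINK * sum(powers)    # top die, next to the sink
        temps = [t_base] * len(powers)
        for i in range(len(powers) - 2, -1, -1): # walk down the stack
            flow_up = sum(powers[: i + 1])       # power crossing interface i/i+1
            temps[i] = temps[i + 1] + R_LAYER * flow_up
        return temps

    def mf_cooled_temps(powers, r_mf=0.6):
        """Interlayer micro-fluidic cooling: each die has its own convective
        path to the coolant, so temperature no longer compounds with height."""
        return [T_AMB + r_mf * p for p in powers]

    powers = [30.0, 25.0, 20.0, 15.0]            # W per die, bottom to top
    print("air:", [round(t, 1) for t in air_cooled_temps(powers)])
    print("mf: ", [round(t, 1) for t in mf_cooled_temps(powers)])

In this toy model the air-cooled bottom die runs roughly 24 degC hotter than the top one, while the micro-fluidic stack stays cooler and far more uniform, which is the qualitative effect that motivates embedded cooling in the abstract.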

Relevance:

100.00%

Abstract:

This thesis deals with quantifying the resilience of a network of pavements. Calculations were carried out by modeling network performance under a set of possible damage-meteorological scenarios with known probabilities of occurrence. Resilience evaluation was performed a priori while accounting for optimal preparedness decisions and additional response actions that can be taken under each of the scenarios. Unlike the common assumption that the pre-event condition of all system components is uniform, fixed, and pristine, component condition evolution was incorporated herein. For this purpose, the health of each system component immediately prior to hazard-event impact, under all considered scenarios, was associated with a serviceability rating. This rating was projected to reflect both natural deterioration and any intermittent improvements due to maintenance. The scheme was demonstrated for a hypothetical case study involving LaGuardia Airport. Results show that resilience can be impacted by the condition of the infrastructure elements, their natural deterioration processes, and prevailing maintenance plans. The findings imply that, in general, upper-bound values are reported in ordinary resilience work, and that including evolving component conditions is of value.
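
A minimal sketch of the scenario-weighted calculation described above: network performance is evaluated under each damage-meteorological scenario, weighted by the scenario's probability, with each component's pre-event serviceability projected from deterioration and intermittent maintenance. The scenario set, deterioration rate, and performance model are invented placeholders, and the preparedness and response optimization mentioned in the abstract is omitted.

    def serviceability(initial, age_years, decay=0.04, maintained_at=None):
        """Pre-event serviceability rating in [0, 1]: linear deterioration,
        with an optional bump from an intermittent maintenance action."""
        s = initial - decay * age_years
        if maintained_at is not None and age_years >= maintained_at:
            s = min(1.0, s + 0.3)        # improvement due to maintenance
        return max(0.0, s)

    def performance(components, damage):
        """Post-event performance: each component contributes its pre-event
        serviceability scaled by the capacity the scenario leaves intact."""
        return sum(s * (1.0 - damage.get(name, 0.0))
                   for name, s in components.items()) / len(components)

    # scenario -> (probability, {component: fraction of capacity lost})
    scenarios = {
        "noreaster":   (0.15, {"runway_A": 0.5, "taxiway_B": 0.2}),
        "storm_surge": (0.05, {"runway_A": 0.8, "apron_C": 0.6}),
        "no_event":    (0.80, {}),
    }

    components = {name: serviceability(1.0, age, maintained_at=10)
                  for name, age in [("runway_A", 12), ("taxiway_B", 6),
                                    ("apron_C", 18)]}

    resilience = sum(p * performance(components, dmg)
                     for p, dmg in scenarios.values())
    print(f"expected network performance = {resilience:.3f}")

Because the components enter at their projected, deteriorated ratings rather than in pristine condition, the expected performance comes out lower than it would under the uniform-pristine assumption, which is exactly the upper-bound effect the abstract notes.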