11 results for Cost Allocation Methods
in Digital Commons at Florida International University
Abstract:
Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners who are contemplating the upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities from the suburban neighborhood to the Central Business District (CBD) and utilize similar right-of-way (ROW) categories. This research aims to develop a method to assist transportation planners and decision makers in determining the most feasible system between LRT and BRT.

Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT, but has significantly lower Operating and Maintenance (O&M) costs. This dissertation examines the factors impacting capacity and costs, and develops capacity-based cost models for the LRT and BRT systems. Various ROW categories and alignment configurations of the systems are also considered in the developed cost models. Kikuchi's fleet size model (1985) and a cost allocation method are used to develop the cost models that estimate capacity and costs.

The comparison between LRT and BRT is complicated by the many possible transportation planning and operation scenarios. To facilitate the process, a user-friendly computer interface integrated with the established capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was developed in Microsoft Visual Basic to guide users through the comparison. The cost models and the LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.
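A minimal sketch of what such a capacity-based comparison computes, assuming a simple fleet-size rule (round-trip time divided by headway) and illustrative capital and O&M figures; none of the numbers or formulas below come from the dissertation's calibrated models.

```python
import math

def fleet_size(round_trip_min: float, headway_min: float) -> int:
    """Vehicles needed to sustain a headway over one round-trip cycle."""
    return math.ceil(round_trip_min / headway_min)

def annual_cost(capital_per_vehicle, vehicles, annualization_rate, om_per_veh_km, veh_km_per_year):
    """Annualized capital cost plus operating-and-maintenance cost."""
    capital = capital_per_vehicle * vehicles * annualization_rate
    om = om_per_veh_km * veh_km_per_year * vehicles
    return capital + om

# Hypothetical inputs: LRT costs more to buy but less per km to operate than BRT.
for mode, veh_capital, om_rate in [("LRT", 4.0e6, 5.0), ("BRT", 0.8e6, 8.0)]:
    n = fleet_size(round_trip_min=60, headway_min=6)
    total = annual_cost(veh_capital, n, annualization_rate=0.07,
                        om_per_veh_km=om_rate, veh_km_per_year=60_000)
    print(f"{mode}: fleet = {n} vehicles, annual cost = ${total:,.0f}")
```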
Abstract:
This dissertation discusses resource allocation mechanisms in several network topologies, including infrastructure wireless networks, non-infrastructure wireless networks, and wire-cum-wireless networks. Different networks may have different resource constraints. Based on actual technologies and implementation models, utility functions, game theory, and a modern control algorithm are introduced to balance power, bandwidth, and customer satisfaction in the system.

In infrastructure wireless networks, a utility function has been used in the Third Generation (3G) cellular network, with the network trying to maximize total utility. In this dissertation, revenue maximization is set as the objective. Compared with previous work on utility maximization, revenue maximization is more practical for cellular network operators to implement. Pricing strategies are studied, and algorithms are given to find the optimal price combination of power and rate that maximizes profit without degrading Quality of Service (QoS) performance.

In non-infrastructure wireless networks, power capacity is limited by the small size of the nodes. In such a network, nodes need to transmit traffic not only for themselves but also for their neighbors, so power management becomes the most important issue for overall network performance. Our innovative routing algorithm, based on a utility function, sets up a flexible framework for different users with different concerns in the same network. This algorithm allows users to make trade-offs among multiple resource parameters. Its flexibility makes it a suitable solution for large-scale non-infrastructure networks. This dissertation also covers non-cooperation problems. By combining game theory and utility functions, equilibrium points can be found among rational users, which can enhance cooperation in the network.

Finally, a wire-cum-wireless network architecture is introduced. This architecture can support multiple services over multiple networks with smart resource allocation methods. Although a SONET-to-WiMAX case was used for the analysis, the mathematical procedure and resource allocation scheme could serve as universal solutions for all infrastructure, non-infrastructure, and combined networks.
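As a rough illustration of the revenue-maximization idea, the sketch below searches over a single rate price while users subscribe only if the price does not exceed their utility; the utility model, capacity limit, and all numbers are assumptions for exposition, not the dissertation's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
willingness = rng.uniform(0.5, 2.0, size=100)   # each user's valuation of one unit of rate
capacity_units = 60                             # assumed cell rate budget, in units

best_revenue, best_price = 0.0, None
for price in np.linspace(0.1, 2.0, 40):
    demand = int((willingness >= price).sum())  # users whose utility still exceeds the price
    served = min(demand, capacity_units)        # admission limited by the rate budget
    revenue = price * served
    if revenue > best_revenue:
        best_revenue, best_price = revenue, price

print(f"revenue-maximizing price = {best_price:.2f}, revenue = {best_revenue:.1f}")
```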
Abstract:
In this paper, a heterogeneous network composed of femtocells deployed within a macrocell network is considered, and a quality-of-service (QoS)-oriented fairness metric which captures important characteristics of tiered network architectures is proposed. Using homogeneous Poisson processes, the sum capacities in such networks are expressed in closed form for co-channel, dedicated channel, and hybrid resource allocation methods. Then a resource splitting strategy that simultaneously considers capacity maximization, fairness constraints, and QoS constraints is proposed. Detailed computer simulations utilizing 3GPP simulation assumptions show that a hybrid allocation strategy with a well-designed resource split ratio enjoys the best cell-edge user performance, with minimal degradation in the sum throughput of macrocell users when compared with that of co-channel operation.
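The intuition behind the resource split can be shown with a toy two-tier calculation: dedicating a fraction of the band to femtocells removes cross-tier interference for macrocell users at the cost of macrocell bandwidth. The SINR values and split ratios below are placeholder assumptions, not the paper's closed-form Poisson-process results.

```python
import math

B = 10e6                                   # total bandwidth in Hz (assumed)
macro_sinr_co, macro_sinr_ded = 2.0, 6.0   # macro-user SINR with / without femto interference (assumed)
femto_sinr = 50.0                          # femto-user SINR (assumed)

def rate(bw, sinr):
    """Shannon rate for a given bandwidth and SINR."""
    return bw * math.log2(1 + sinr)

for rho in (0.0, 0.3, 0.6):
    if rho == 0.0:                         # co-channel: both tiers share the full band
        macro, femto = rate(B, macro_sinr_co), rate(B, femto_sinr)
    else:                                  # dedicated split: no cross-tier interference
        macro, femto = rate((1 - rho) * B, macro_sinr_ded), rate(rho * B, femto_sinr)
    print(f"femto share {rho:.1f}: macro {macro/1e6:5.1f} Mb/s, femto {femto/1e6:5.1f} Mb/s")
```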
Abstract:
Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g. ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for the reduction of bandwidth allocation, and guidelines for central-to-node communication protocols. Our conclusion is that dialed data lines offer a cost-effective alternative for the implementation of distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
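A back-of-the-envelope version of the kind of communication cost estimate involved: per-minute dialed (e.g. ISDN) tariffs plus call setup versus a flat-rate leased line, with the cheaper option depending on query volume. All tariffs, call durations, and volumes below are made-up placeholder values, not figures from the dissertation.

```python
def dialed_cost(queries_per_month, minutes_per_query=0.5,
                tariff_per_minute=0.10, setup_per_call=0.05):
    """Per-minute tariff plus a call-setup charge for every query."""
    return queries_per_month * (minutes_per_query * tariff_per_minute + setup_per_call)

def leased_cost(flat_monthly=900.0):
    """Flat monthly charge for a dedicated leased line."""
    return flat_monthly

for q in (1_000, 5_000, 10_000, 20_000):
    d, l = dialed_cost(q), leased_cost()
    cheaper = "dialed" if d < l else "leased"
    print(f"{q:6d} queries/month: dialed ${d:8.2f} vs leased ${l:8.2f} -> {cheaper} cheaper")
```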
Abstract:
The increasing needs for computational power in areas such as weather simulation, genomics, or Internet applications have led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid, and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers.

However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation in the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
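A minimal sketch of a cost- and slowdown-aware site-selection policy in the spirit of the third point above; the Site structure, site parameters, and scoring weights are illustrative assumptions rather than the policies implemented in the work.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cost_per_cpu_hour: float
    queue_wait_hours: float    # estimated delay before provisioned resources start
    free_cpus: int

def pick_site(sites, cpus_needed, runtime_hours, cost_weight=1.0, slowdown_weight=10.0):
    """Greedy policy: score each feasible site by monetary cost plus weighted wait."""
    feasible = [s for s in sites if s.free_cpus >= cpus_needed]
    if not feasible:
        return None
    return min(feasible,
               key=lambda s: cost_weight * s.cost_per_cpu_hour * cpus_needed * runtime_hours
                             + slowdown_weight * s.queue_wait_hours)

sites = [Site("campus-grid", 0.02, 4.0, 64),
         Site("partner-grid", 0.03, 1.0, 256),
         Site("cloud-provider", 0.10, 0.1, 1024)]
print(pick_site(sites, cpus_needed=128, runtime_hours=6))
```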
Abstract:
Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with an exclusive buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. First, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process and can thus make fast scheduling decisions. Second, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Third, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
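To make the queue-length-proportional idea concrete, here is one natural way to turn virtual-output-queue lengths into a feasible rate matrix in which no input or output is oversubscribed; the normalization used is an illustrative choice, not necessarily the exact QLP rule from the dissertation.

```python
import numpy as np

def proportional_allocation(q: np.ndarray) -> np.ndarray:
    """q[i, j]: queue length at input i for output j; returns a feasible rate matrix."""
    row = q.sum(axis=1, keepdims=True)             # total traffic queued at each input
    col = q.sum(axis=0, keepdims=True)             # total traffic queued for each output
    denom = np.maximum(np.maximum(row, col), 1e-12)
    return q / denom                               # every row sum and column sum is <= 1

q = np.array([[5, 0, 2],
              [1, 4, 0],
              [0, 3, 3]], dtype=float)
r = proportional_allocation(q)
print(np.round(r, 2))
print("input loads:", np.round(r.sum(axis=1), 2))
print("output loads:", np.round(r.sum(axis=0), 2))
```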
Abstract:
The span of control is the most discussed single concept in classical and modern management theory. In specifying conditions for organizational effectiveness, the span of control has generally been regarded as a critical factor. Existing research has focused mainly on qualitative methods to analyze this concept, for example, heuristic rules based on experience and/or intuition. This research takes a quantitative approach to the problem and formulates it as a binary integer model, which is used as a tool to study the organizational design issue. The model considers a range of requirements affecting management and supervision of a given set of jobs in a company. The decision variables include the allocation of jobs to workers, considering the complexity and compatibility of each job with respect to workers, and the management requirements for planning, execution, training, and control activities in a hierarchical organization. The objective of the model is to minimize operations cost, defined as the sum of supervision costs at each level of the hierarchy and the costs of workers assigned to jobs. The model is intended for application in make-to-order industries as a design tool. It could also be applied to make-to-stock companies as an evaluation tool, to assess the optimality of their current organizational structure. Extensive experiments were conducted to validate the model, to study its behavior, and to evaluate the impact of changing parameters on practical problems. This research proposes a meta-heuristic approach to solving large problem instances, based on the concept of greedy algorithms and the Meta-RaPS algorithm. The proposed heuristic was evaluated with two measures of performance: solution quality and computational speed. Quality is assessed by comparing the obtained objective function value to the one achieved by the optimal solution. Computational efficiency is assessed by comparing the computer time used by the proposed heuristic to the time taken by a commercial software system. Test results show that the proposed heuristic procedure generates good solutions in a time-efficient manner.
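A small sketch of a Meta-RaPS-style construction step for a job-to-worker assignment: each job is assigned either greedily (cheapest compatible worker) or randomly from a restricted candidate list, controlled by priority and restriction percentages. The cost data, compatibility rule, and parameter values are illustrative assumptions, not the dissertation's model.

```python
import random

def construct(jobs, workers, cost, compatible, priority_pct=0.7, restriction_pct=0.15, seed=1):
    """One Meta-RaPS-style construction pass assigning each job to a worker."""
    rng = random.Random(seed)
    assignment = {}
    for job in jobs:
        candidates = sorted((cost[job][w], w) for w in workers if compatible(job, w))
        best_cost = candidates[0][0]
        if rng.random() < priority_pct:
            assignment[job] = candidates[0][1]     # greedy move: cheapest compatible worker
        else:
            pool = [w for c, w in candidates if c <= best_cost * (1 + restriction_pct)]
            assignment[job] = rng.choice(pool)     # restricted random move
    return assignment

jobs, workers = ["j1", "j2", "j3"], ["w1", "w2"]
cost = {"j1": {"w1": 4, "w2": 6}, "j2": {"w1": 5, "w2": 3}, "j3": {"w1": 7, "w2": 7}}
print(construct(jobs, workers, cost, compatible=lambda j, w: True))
```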
Abstract:
Growth, morphology, and biomass allocation in response to water depth were studied in the white water lily, Nymphaea odorata Aiton. Plants were grown for 13 months in 30, 60, and 90 cm of water in outdoor mesocosms in southern Florida. Water lily growth was distinctly seasonal, with plants at all water levels producing more and larger leaves and more flowers in the warmer months. Plants in 30 cm of water produced more but smaller and shorter-lived leaves than plants at the 60 cm and 90 cm water levels. Although plants did not differ significantly in total biomass at harvest, plants in deeper water had significantly greater biomass allocated to leaves and roots, while plants in 30 cm of water had significantly greater biomass allocated to rhizomes. Although lamina area and petiole length increased significantly with water level, lamina specific weight did not differ among water levels. Petiole specific weight increased significantly with increasing water level, implying a greater cost to tethering the larger laminae in deeper water. Lamina length and width scaled similarly at different water levels and modeled lamina area (LA) accurately (LA_modeled = 0.98 × LA_measured + 3.96, R² = 0.99). Lamina area was highly correlated with lamina weight (LW = 8.43 × LA − 66.78, R² = 0.93), so simple linear measurements can predict water lily lamina area and lamina weight. These relationships were used to calculate monthly lamina surface area in the mesocosms. Plants in 30 cm of water had lower total photosynthetic surface area than plants at the 60 cm and 90 cm water levels throughout the study, and in the summer plants in 90 cm of water showed a large increase in photosynthetic surface area compared with plants in shallower water. These results support setting Everglades restoration water depth targets for sloughs at depths ≥45 cm and suggest that in the summer optimal growth for white water lilies occurs at depths ≥75 cm.
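The two reported regressions can be applied directly; the short sketch below wraps them as functions so a measured lamina area yields a modeled area and a predicted lamina weight. The abstract does not state units, so the example input is a hypothetical value.

```python
def lamina_area_modeled(la_measured: float) -> float:
    """LA_modeled = 0.98 * LA_measured + 3.96  (R^2 = 0.99)."""
    return 0.98 * la_measured + 3.96

def lamina_weight(la: float) -> float:
    """LW = 8.43 * LA - 66.78  (R^2 = 0.93)."""
    return 8.43 * la - 66.78

la = 200.0   # hypothetical measured lamina area, in the units used in the study
print(f"modeled area: {lamina_area_modeled(la):.1f}, predicted weight: {lamina_weight(la):.1f}")
```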
Abstract:
Fossil fuels constitute a significant fraction of the world's energy demand. The burning of fossil fuels emits huge amounts of carbon dioxide into the atmosphere. Therefore, the limited availability of fossil fuel resources and the environmental impact of their use require a change to alternative energy sources or carriers (such as hydrogen) in the foreseeable future. The development of methods to mitigate carbon dioxide emission into the atmosphere is equally important. Hence, extensive research has been carried out on the development of cost-effective technologies for carbon dioxide capture and techniques to establish a hydrogen economy. Hydrogen is a clean energy fuel with a very high specific energy content of about 120 MJ/kg and an energy density of 10 Wh/kg. However, its potential is limited by the lack of environment-friendly production methods and a suitable storage medium. Conventional hydrogen production methods such as steam methane reformation and coal gasification were modified by the inclusion of NaOH. The modified methods are thermodynamically more favorable and can be regarded as near-zero-emission production routes. Further, suitable catalysts were employed to accelerate the proposed NaOH-assisted reactions, and a relation between reaction yield and catalyst size was established. A 1:1:1 molar mixture of LiAlH4, NaNH2, and MgH2 was investigated as a potential hydrogen storage medium. The hydrogen desorption mechanism was explored using in-situ XRD and Raman spectroscopy. Mesoporous metal oxides were assessed for CO2 capture in both the power and non-power sectors. 96.96% of mesoporous MgO (325 mesh size, surface area = 95.08 ± 1.5 m²/g) was converted to MgCO3 at 350°C and 10 bar CO2. By contrast, the absorption capacity of 1 h ball-milled zinc oxide was low, 0.198 g CO2/g ZnO at 75°C and 10 bar CO2. Interestingly, 57% mass conversion of an Fe and Fe3O4 mixture to FeCO3 was observed at 200°C and 10 bar CO2. MgO, ZnO, and Fe3O4 could be completely regenerated at 550°C, 250°C, and 350°C, respectively. Furthermore, the possible retrofit of MgO and a mixture of Fe and Fe3O4 to a 300 MWe coal-fired power plant and an iron-making plant was also evaluated.
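As a quick stoichiometric check on the sorbent figures quoted above (MO + CO2 → MCO3), the theoretical CO2 uptake of a metal oxide is the molar-mass ratio M(CO2)/M(MO); the short calculation below uses standard molar masses together with the capacities reported in the abstract.

```python
M_CO2 = 44.01                           # molar masses in g/mol
oxides = {"MgO": 40.30, "ZnO": 81.38}

for name, m_oxide in oxides.items():
    print(f"{name}: theoretical uptake = {M_CO2 / m_oxide:.3f} g CO2 per g oxide")

# The reported 0.198 g CO2/g ZnO is therefore roughly 37% of the theoretical
# maximum, whereas the ~97% MgO-to-MgCO3 conversion approaches its maximum.
print(f"ZnO fraction of theoretical: {0.198 / (M_CO2 / 81.38):.0%}")
```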
Abstract:
BACKGROUND: Cuban Americans have a high prevalence of type 2 diabetes, placing them at risk for cardiovascular disease (CVD) and increased medical costs. Little is known regarding the lifestyle risk factors for CVD among Cuban Americans. This study investigated modifiable CVD risk factors in Cuban Americans with and without type 2 diabetes. METHODS: Sociodemographics, anthropometrics, blood pressure, physical activity, dietary intake, and biochemical parameters were collected and assessed for n=79 Cuban Americans with type 2 diabetes and n=80 without. RESULTS: Fourteen percent of those with diabetes and 24 percent of those without diabetes engaged in the recommended level of physical activity. Over 90 percent exceeded the recommended intake of saturated fats. Thirty-five percent were former or current smokers. DISCUSSION: Cuban Americans had several lifestyle factors that are likely to increase the risk of CVD. Their dietary factors were associated with blood cholesterol and body weight, which have been shown to affect medical expenses. These findings may be used in designing programs for the prevention of CVD as well as type 2 diabetes in Cuban Americans.