887 results for network cost models


Relevance: 30.00%

Abstract:

For the last three decades, the Capital Asset Pricing Model (CAPM) has been the dominant model for calculating expected return. In the early 1990s, Fama and French (1992) developed the Fama and French three-factor model by adding two additional factors to the CAPM. However, even with these models, estimates of the expected return have been found to be inaccurate (Elton, 1999; Fama & French, 1997). Botosan (1997) introduced a new approach to estimating the expected return. This approach employs an equity valuation model to calculate the internal rate of return (IRR), often called the 'implied cost of equity capital,' as a proxy for the expected return, and it has been gaining popularity among researchers. A critical review of the literature will help inform hospitality researchers about the issue and encourage them to adopt the new approach in their own studies.
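
For illustration only (not drawn from any of the cited studies), a minimal sketch of the implied-cost-of-equity idea: take the discount rate that equates the current share price to forecast payoffs as the proxy for expected return, and solve for it numerically. All inputs below are hypothetical.

```python
# Hypothetical numbers throughout; solves price = sum_t D_t/(1+r)^t + TV/(1+r)^T
# for r by bisection, using the fact that present value falls as r rises.

def implied_cost_of_equity(price, dividends, terminal_value, lo=1e-4, hi=1.0, tol=1e-8):
    def present_value(r):
        horizon = len(dividends)
        pv = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
        return pv + terminal_value / (1 + r) ** horizon

    while hi - lo > tol:
        mid = (lo + hi) / 2
        if present_value(mid) > price:
            lo = mid   # rate too low: forecast payoffs are worth more than the price
        else:
            hi = mid
    return (lo + hi) / 2

# Example: $50 share price, three years of dividend forecasts, $60 terminal value.
r = implied_cost_of_equity(50.0, [2.0, 2.2, 2.4], 60.0)
print(f"implied cost of equity: {r:.2%}")   # roughly 10%
```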

Relevance: 30.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated capacity requirements of client virtual machines (VMs) while renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
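
As a rough sketch of the performance-modeling step (not the thesis's code, benchmarks, or hyperparameters), one of the two mentioned tools, a Support Vector Machine regressor, can be trained to map resource-control settings to a performance metric; synthetic data stands in for benchmark measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical samples of [cpu_share, memory_MB, io_MBps] per benchmark run.
X = rng.uniform([0.1, 256, 10], [1.0, 4096, 200], size=(200, 3))
# Hypothetical response time that degrades as allocations shrink, plus noise.
y = 50 / X[:, 0] + 2e4 / X[:, 1] + 300 / X[:, 2] + rng.normal(0, 2, 200)

model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.5))
model.fit(X, y)

# The fitted model can then drive VM sizing: predict performance for a
# candidate allocation and check it against the SLA target.
candidate = np.array([[0.5, 1024, 100]])
print("predicted response time:", model.predict(candidate)[0])
```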

Relevance: 30.00%

Abstract:

Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g. ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for the reduction of bandwidth allocation, and guidelines for central-to-node communication protocols. We conclude that dialed data lines offer a cost-effective alternative for implementing distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
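
A toy illustration of the communication-cost-estimation idea, with a wholly hypothetical tariff structure (setup fee plus per-billing-unit charges on a 64 kbps channel); the dissertation's actual cost models are not reproduced here.

```python
import math

# Toy tariff-style estimate only; the tariff structure and numbers are invented.

def dialed_line_cost(data_mb, bandwidth_kbps=64, setup_s=3.0,
                     billing_unit_s=60, rate_per_unit=0.05, setup_fee=0.10):
    """Cost of one connection when connect time is billed in whole units."""
    transfer_s = (data_mb * 8 * 1024) / bandwidth_kbps   # MB -> kilobits / kbps
    billed_units = math.ceil((setup_s + transfer_s) / billing_unit_s)
    return setup_fee + billed_units * rate_per_unit

# A coordinator node could compare candidate execution sites by the estimated
# cost of shipping each query result before distributing the query.
print(dialed_line_cost(data_mb=5.0))   # 0.65 for a 5 MB result on one 64 kbps channel
```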

Relevance: 30.00%

Abstract:

Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, wherever a combination of small size, light weight, and high surface-area-to-volume-ratio fluid networks is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that pack a large amount of heat transfer surface area into a very small volume, but do so at the cost of increased pumping power. To offset this cost, the use of a branching fluid network for distributing coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant in the heat sink, while maintaining compact, high surface-area-to-volume ratios. The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs that minimize entropy generation. Two heat sink designs at different scales are built and tested experimentally and analytically. The first uses this new derivation of Murray's Law; the second uses a combination of Murray's Law and Constructal Theory. The experimental results were used to verify the analytical and numerical models, which were then used to compare the heat sinks against other compact high-performance designs. The results show that the branching fluid network design techniques significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations which optimize the heat-transfer-to-pumping-power ratio of a single cooling channel element. Elements can be connected together using a set of derived geometric guidelines governing branch diameters and angles, so the methodology can be used to design branching fluid networks that fit any geometry.
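
For reference, the classical statement of Murray's Law that the branching designs build on (the thesis extends its derivation to entropy-generation minimization) relates parent and daughter channel diameters at each junction:

```latex
% Classical Murray's Law at a junction: d_0 is the parent channel diameter,
% d_i are the daughter diameters; the symmetric case follows directly.
\[
  d_0^{3} = \sum_{i=1}^{n} d_i^{3}
  \qquad\Longrightarrow\qquad
  d_i = \frac{d_0}{n^{1/3}} \quad \text{(n equal daughters)}
\]
```

For a symmetric bifurcation (n = 2), each daughter diameter is therefore about 0.79 of the parent diameter.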

Relevance: 30.00%

Abstract:

Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size, and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require the same accuracy. Background traffic nevertheless has a significant impact on foreground traffic, since it competes for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three respects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions. The improved model correctly reflects the network conditions in the reverse direction of the data traffic and reproduces the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model built on analytical models of TCP congestion control behavior. It outperforms existing traffic models in that it correctly captures overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on generating traffic on a single link and cannot easily be extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively in the evaluation work of network studies.
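
As a hedged illustration of what an analytical model of TCP congestion control can look like inside a rate-based traffic generator (not necessarily the exact formulation used by RTCP), the classical square-root formula gives a steady-state send rate directly from segment size, round-trip time, and loss probability, with no per-packet simulation.

```python
from math import sqrt

# One classical analytical model of steady-state TCP throughput (the
# "square-root" formula), shown only to illustrate the kind of model a
# rate-based TCP traffic generator can use in place of per-packet simulation.

def tcp_rate_bps(mss_bytes, rtt_s, loss_prob):
    """Approximate long-run send rate as MSS / (RTT * sqrt(2p/3))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(2 * loss_prob / 3))

# Example: 1460-byte segments, 100 ms round-trip time, 1% loss.
print(f"{tcp_rate_bps(1460, 0.100, 0.01) / 1e6:.2f} Mbps")   # about 1.4 Mbps
```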

Relevance: 30.00%

Abstract:

The estimation of pavement layer moduli through the use of an artificial neural network is a new concept which provides a less laborious strategy for backcalculation procedures. Artificial neural networks are biologically inspired models of the human nervous system, specifically designed to learn an input-output mapping. This study demonstrates how an artificial neural network uses non-destructive pavement test data to determine flexible pavement layer moduli. The input parameters include plate loadings, corresponding sensor deflections, temperature of the pavement surface, pavement layer thicknesses, and independently deduced pavement layer moduli.
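
A purely illustrative sketch of the learning setup, with synthetic placeholder data rather than the study's falling-weight deflectometer measurements: a small feed-forward network maps deflection-basin features to layer moduli.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Features per test: plate load, 7 sensor deflections, surface temperature,
# and 3 layer thicknesses (all scaled to [0, 1] here for simplicity).
X = rng.uniform(size=(500, 12))
# Targets: moduli of the asphalt, base, and subgrade layers (scaled units).
y = rng.uniform(size=(500, 3))

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=1)
net.fit(X, y)                    # trained on deflection-basin -> moduli pairs
print(net.predict(X[:1]))        # moduli estimate for one test
```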

Relevance: 30.00%

Abstract:

As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls into such models often results in significant and unstable changes in network attributes, which, in turn, leads to model instability, while ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons; such architectures may be used to gain an understanding of biological neural networks, or to solve artificial intelligence problems without necessarily modeling a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates from the ANN delay model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes; larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that couples the ANN delay-estimating model with a user-equilibrium (UE) traffic assignment model and employs the Frank-Wolfe method to reach a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist and expedite the iterative Frank-Wolfe process. The performance of the combined system confirms that convergence is achieved, although the global optimum may not be guaranteed.
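
To make the assignment side concrete, here is a toy user-equilibrium computation on two parallel routes, with closed-form placeholder delay functions standing in for the ANN delay model and a method-of-successive-averages step in place of the Frank-Wolfe line search; it is meant only to show the fixed-point logic the combined system iterates on. All numbers are invented.

```python
def delay_a(x):        # placeholder cost (min) on route A as a function of flow
    return 10 * (1 + 0.15 * (x / 400) ** 4)

def delay_b(x):        # placeholder cost (min) on route B
    return 15 * (1 + 0.15 * (x / 600) ** 4)

demand = 1000.0
flow_a = demand        # start with everything on route A
for k in range(1, 200):
    # All-or-nothing assignment toward whichever route is currently cheaper.
    target = demand if delay_a(flow_a) <= delay_b(demand - flow_a) else 0.0
    flow_a += (target - flow_a) / (k + 1)     # averaging step

flow_b = demand - flow_a
print(f"route A: {flow_a:.0f}, route B: {flow_b:.0f}")
print(f"costs:   {delay_a(flow_a):.2f} vs {delay_b(flow_b):.2f}  (nearly equal)")
```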

Relevance: 30.00%

Abstract:

Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring, and smart environments. Many WSNs have mission-critical tasks, such as military applications, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly arise from attacks. In general, attacks in WSNs can be classified as external or internal. In an external attack, the attacking node is not an authorized participant of the sensor network; cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, can defeat these prevention efforts. Knowing the probability of node compromise helps systems detect and defend against it. Although some approaches can detect and defend against node compromise, few can estimate its probability. Hence, we develop basic uniform, basic gradient, intelligent uniform, and intelligent gradient models of node compromise distribution, using probability theory, in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise, and applying them in system security designs can improve security and decrease overhead in nearly every security area. Moreover, based on these models, we design a novel secure routing algorithm to address the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node-compromise detection mechanism. The routing paths in our algorithm detour around nodes which have already been detected as compromised or have a high probability of being compromised. Simulation results show that our algorithm is effective in protecting routing paths from node compromise, whether detected or not.
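
One way to make the detour idea concrete (an illustrative sketch, not the paper's exact algorithm): assign each node an estimated compromise probability p from such a model and route along the path that maximizes the probability of traversing only uncompromised nodes, which reduces to a shortest-path search on per-node penalties of -log(1 - p). The topology and probabilities below are invented.

```python
import heapq
from math import log

graph = {                       # directed adjacency lists of a toy sensor network
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C"],
    "C": ["D"],
    "D": [],
}
p_compromise = {"S": 0.0, "A": 0.6, "B": 0.1, "C": 0.05, "D": 0.0}

def safest_path(src, dst):
    cost, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == dst:
            break
        if c > cost.get(u, float("inf")):
            continue                               # stale heap entry
        for v in graph[u]:
            nc = c - log(1.0 - p_compromise[v])    # penalty for relaying via v
            if nc < cost.get(v, float("inf")):
                cost[v], prev[v] = nc, u
                heapq.heappush(heap, (nc, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

print(safest_path("S", "D"))   # ['S', 'B', 'C', 'D']: detours around A (p = 0.6)
```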

Relevance: 30.00%

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our national highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to be able to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights on the design of AID for arterial street applications.
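
For clarity, the reported metrics can be computed from interval-level alarm and ground-truth sequences roughly as follows; the toy sequences and 30-second interval below are hypothetical, not results from the dissertation's experiments.

```python
def detection_metrics(alarms, truth, interval_s=30):
    """Detection rate over incident episodes, false alarm rate over
    incident-free intervals, and mean time-to-detect in seconds."""
    # Split the ground truth into contiguous incident episodes.
    episodes, start = [], None
    for i, flag in enumerate(truth + [0]):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            episodes.append(range(start, i))
            start = None

    detected, delays = 0, []
    for episode in episodes:
        hits = [i for i in episode if alarms[i]]
        if hits:
            detected += 1
            delays.append((hits[0] - episode[0]) * interval_s)

    clear = [i for i, flag in enumerate(truth) if not flag]
    dr = detected / len(episodes)
    far = sum(alarms[i] for i in clear) / len(clear)
    mttd = sum(delays) / len(delays) if delays else float("nan")
    return dr, far, mttd

alarms = [0, 0, 0, 1, 1, 1, 0, 1, 0, 0]
truth  = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
print(detection_metrics(alarms, truth))   # (1.0, 0.1666..., 30.0)
```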

Relevance: 30.00%

Abstract:

In the framework of the global energy balance, the radiative energy exchanges between Sun, Earth, and space are now accurately quantified from new satellite missions. Much less is known about the magnitude of the energy flows within the climate system and at the Earth's surface, which cannot be directly measured by satellites. In addition to satellite observations, here we make extensive use of the growing number of surface observations to constrain the global energy balance not only from space, but also from the surface. We combine these observations with the latest modeling efforts performed for the 5th IPCC assessment report to infer best estimates for the global mean surface radiative components. Our analyses favor global mean downward surface solar and thermal radiation values near 185 and 342 W m^-2, respectively, which are most compatible with surface observations. Combined with estimated surface absorbed solar radiation and thermal emission of 161 W m^-2 and 397 W m^-2, respectively, this leaves 106 W m^-2 of surface net radiation available for distribution among the non-radiative surface energy balance components. The climate models overestimate the downward solar and underestimate the downward thermal radiation, nevertheless simulating an adequate global mean surface net radiation through error compensation. This also suggests that, globally, the simulated surface sensible and latent heat fluxes, around 20 and 85 W m^-2 on average, are realistic. The findings of this study are compiled into a new global energy balance diagram, which may help reconcile currently disputed inconsistencies between energy and water cycle estimates.
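
Written out, the quoted components close the surface budget as follows (the symbols are conventional shorthand, not notation from the paper):

```latex
% Surface energy budget implied by the quoted values (all in W m^-2).
\[
  R_{\mathrm{net}} = SW_{\mathrm{abs}} + LW_{\downarrow} - LW_{\uparrow}
                   = 161 + 342 - 397 = 106,
\]
\[
  R_{\mathrm{net}} \approx Q_H + Q_E \approx 20 + 85 = 105 .
\]
```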

Relevance: 30.00%

Abstract:

Introduction: Population aging in Brazil underscores the need to discuss the proper management of the health budget, especially in high-complexity care, where costly procedures, limited resources, and the need for cost containment coexist while demand grows in direct proportion to the number of elderly people in the country. Objective: This research aimed to analyze the costs resulting from the admission of elderly patients to intensive care units (ICUs) and the factors associated with them. Methods: This is a cross-sectional, descriptive, and exploratory study with a quantitative approach. Data were collected from the medical records of elderly patients hospitalized in ICUs in the Brazilian city of Natal-RN between November 1, 2013 and January 31, 2014. The variables collected describe the sociodemographic profile, morbidity, and characteristics of the hospitalization. The dependent variable was dichotomized at the 75th percentile into high and low hospitalization cost and tested against the independent variables using the chi-square test. Associations with p < 0.20 in the bivariate analysis were entered into a multiple logistic regression. Three regression models were built: a general model comprising all 493 hospitalizations in the study, one with the 181 patients admitted through the public health system (SUS), and a third with the 312 cases from the private health sector. Results: In the general model, respiratory disease, hospitalization in the private system, patient disorientation, and previous stroke were associated with a greater probability of high ICU spending. In SUS-funded hospitalizations, this probability was associated with patient disorientation, age 80 or older, sepsis, and admission for clinical reasons. In cases from the private health network, high expenditure was associated with respiratory disease, mechanical ventilation, hospitalization for clinical reasons, and patient disorientation. Conclusion: Increased expenditure on ICU hospitalization of the elderly depends on the clinical condition of the individual, which highlights the importance of avoiding hospitalizations for conditions sensitive to primary care through preventive actions and comprehensive care for the elderly. Moreover, obtaining different explanatory models according to the type of hospital funding demonstrates the importance of how health services are organized in the composition of hospitalization costs among the elderly. Another finding is that, to improve funding, the available resources must be used rationally by avoiding unnecessary ICU admissions of elderly patients at the extremes of severity; under constrained funding, ICU admission of elderly patients who are non-critical or terminally ill can compromise the quality of services provided to those who really need intensive care.
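
A schematic sketch of the analysis pipeline described above, using invented variable names and random toy data rather than the study's records: cost is dichotomized at the 75th percentile, predictors are screened with chi-square tests at p < 0.20, and the retained predictors enter a multiple logistic regression.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cost": rng.gamma(2.0, 5000.0, 493),            # hypothetical ICU cost per stay
    "respiratory_disease": rng.integers(0, 2, 493),
    "disoriented": rng.integers(0, 2, 493),
    "age_80_plus": rng.integers(0, 2, 493),
})
df["high_cost"] = (df["cost"] > df["cost"].quantile(0.75)).astype(int)

# Bivariate screening: keep predictors with a chi-square p-value below 0.20.
candidates = ["respiratory_disease", "disoriented", "age_80_plus"]
keep = [c for c in candidates
        if chi2_contingency(pd.crosstab(df[c], df["high_cost"]))[1] < 0.20]

if keep:
    model = LogisticRegression().fit(df[keep], df["high_cost"])
    print(dict(zip(keep, np.exp(model.coef_[0]).round(2))))   # odds ratios
else:
    print("no predictor passed the p < 0.20 screen (random toy data)")
```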

Relevance: 30.00%

Abstract:

The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need to have their nonlinearities considered so that their models can be used successfully in applications such as control, prediction, and inference. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to the identification of nonlinear dynamical systems subject to noise and outliers. These elements generally have negative effects on the identification procedure, resulting in erroneous interpretations of the dynamical behavior of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainty, the multiresolution characteristics of wavelet theory, and the learning and generalization abilities of artificial neural networks. Usually, the learning procedure of these networks is a gradient-based method that uses the mean squared error as its cost function. This work proposes replacing this traditional function with an Information Theoretic Learning similarity measure called correntropy. With this similarity measure, higher-order statistics can be considered during the FWNN training process; the measure is therefore better suited to non-Gaussian error distributions and makes training less sensitive to the presence of outliers. To evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multi-section tank, and a simulated system based on a model of the human knee joint. The results demonstrate that using correntropy as the cost function of the error backpropagation algorithm makes the identification procedure with FWNN models more robust to outliers, provided the Gaussian kernel width of the correntropy is properly adjusted.
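
To show why the swap matters, here is a minimal numerical comparison of the two cost functions on a toy error vector (the kernel width is arbitrary and this is not the FWNN training code): correntropy's Gaussian kernel bounds the influence of any single error, whereas the squared error does not.

```python
import numpy as np

def mse(errors):
    return np.mean(errors ** 2)

def correntropy(errors, sigma=1.0):
    """Average Gaussian kernel of the errors (maximized during training)."""
    return np.mean(np.exp(-errors ** 2 / (2 * sigma ** 2)))

errors = np.array([0.1, -0.2, 0.05, 0.15])
errors_outlier = np.append(errors, 25.0)        # one gross outlier

print(mse(errors), mse(errors_outlier))                  # ~0.019 -> ~125: dominated by the outlier
print(correntropy(errors), correntropy(errors_outlier))  # ~0.99 -> ~0.79: each sample's effect is bounded
```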

Relevance: 30.00%

Abstract:

Funded by the European Union's Horizon 2020 Marie Sklodowska-Curie programme (grant 661211); Research Foundation Flanders (FWO) (grants G.0055.08, G.0149.09, G.0308.13); the FWO Research Network on Eco-Evolutionary Dynamics; the French Ministère de l'Energie, de l'Ecologie, du Développement Durable et de la Mer through the EU FP6 BiodivERsA Eranet; and NERC (grant NE/J008001/1).

Relevance: 30.00%

Abstract:

High street optometric practices are for-profit businesses. They mostly provide sight testing and eye examination services and sell optical products such as spectacles and contact lenses. Sight testing services are often sold at a vastly reduced price, and profits are generated primarily through high-margin spectacle sales, in a loss-leading strategy. Published literature highlights weaknesses in this strategy, as it forms a barrier to widening the scope of services provided within optometric practices, including specialist non-refraction-based services such as shared care. In addition, this business strategy discourages investment in advanced diagnostic equipment and higher professional qualifications. The aim of this thesis was to develop a greater understanding of the traditional loss-leading strategy and to assess the plausibility of alternative business models to support the development of specialist non-refraction services within high street optometric practice. The research was based on a single independent optometric practice that specialises in advanced retinal imaging and offers a broad range of shared care services. Specialist non-refraction-based services were found to be poor generators of spectacle sales, likely due to patient needs and presenting concerns. Alternative business strategies to support these services included charging more realistic professional fees via cost-based pricing and monthly payment plans. These strategies enabled specialist services to be more self-sustaining, with less reliance on cross-subsidy from spectacle sales; improving operational efficiency can further increase the stand-alone profits of specialist services. Practice managers may be reluctant to increase professional fees due to market pressures and confidence, but this thesis found that patients accepted increased professional fees. Practice managers can implement alternative business models to enhance eye care provision in high street optometric practices; these models also improve the revenues and profits generated by clinical services and improve patient loyalty.

Relevance: 30.00%

Abstract:

Improvements in genomic technology, both in the increased speed and reduced cost of sequencing, have expanded the appreciation of the abundance of human genetic variation. However, the sheer amount of variation, as well as its varying type and genomic content, poses a challenge in understanding the clinical consequence of a single mutation. This work uses several methodologies to interpret the observed variation in the human genome and presents novel strategies for predicting allele pathogenicity.

Using the zebrafish model system as an in vivo assay of allele function, we identified a novel driver of Bardet-Biedl Syndrome (BBS) in CEP76. A combination of targeted sequencing of 785 cilia-associated genes in a cohort of BBS patients and subsequent in vivo functional assays recapitulating the human phenotype gave strong evidence for the role of CEP76 mutations in the pathology of an affected family. This portion of the work demonstrated the necessity of functional testing in validating disease-associated mutations, and added to the catalogue of known BBS disease genes.

Further study into the role of copy-number variations (CNVs) in a cohort of BBS patients showed the significant contribution of CNVs to disease pathology. Using high-density array comparative genomic hybridization (aCGH), we were able to identify pathogenic CNVs as small as several hundred bp. Dissection of the constituent genes and in vivo experiments investigating epistatic interactions between affected genes allowed for an appreciation of several paradigms by which CNVs can contribute to disease. This study revealed that the contribution of CNVs to disease in BBS patients is much higher than previously expected, and demonstrated the necessity of considering CNV contribution in future (and retrospective) investigations of human genetic disease.

Finally, we used a combination of comparative genomics and in vivo complementation assays to identify second-site compensatory modification of pathogenic alleles. These pathogenic alleles, which are found compensated in other species (termed compensated pathogenic deviations [CPDs]), represent a significant fraction (3% to 10%) of human disease-associated alleles. In silico pathogenicity prediction algorithms, a valuable method of allele prioritization, often misclassify these alleles as benign, leading to the omission of potentially informative variants in studies of human genetic disease. We created a mathematical model that predicts CPDs and putative compensatory sites, and functionally showed in vivo that a second-site mutation can mitigate the pathogenicity of disease alleles. Additionally, we made publicly available an in silico module for the prediction of CPDs and modifier sites.

These studies have advanced the ability to interpret the pathogenicity of multiple types of human variation and have made tools available for others to do the same.