868 results for Cost Over run
Abstract:
Extensive numerical investigations are undertaken to analyze and compare, for the first time, the performance, techno-economics, and power consumption of three-level electrical Duobinary, optical Duobinary, and PAM-4 modulation formats as candidates for high-speed next-generation PONs supporting downstream 40 Gb/s per-wavelength signal transmission over standard SMFs in the C-band. Optimization of transceiver bandwidths is undertaken to show the feasibility of utilizing low-cost, band-limited components to support next-generation PON transmissions. The effect of electro-absorption modulator chirp is examined for electrical Duobinary and PAM-4. Electrical Duobinary and optical Duobinary are power-efficient schemes for shorter transmission distances of 10 km of SMF, and optical Duobinary offers the best receiver sensitivity, albeit with a relatively high transceiver cost. PAM-4 shows the best power budget and cost-efficiency for longer distances of around 20 km, although it consumes more power. Electrical Duobinary shows the best trade-off between performance, cost, and power dissipation.
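As a rough illustration of the bandwidth-versus-levels trade-off between these formats, the sketch below maps a bit stream to three-level electrical duobinary symbols and to PAM-4 symbols. This is a minimal, idealized baseband model written in Python; the precoder and Gray mapping shown are common textbook choices and are assumptions here, not the transceiver design evaluated in the paper.

    # Minimal baseband mapping of a bit stream to three-level duobinary
    # and to PAM-4 symbols (illustrative only).

    def duobinary_encode(bits):
        # Precode (XOR with the previous precoded bit), then form b[n] + b[n-1],
        # giving three levels {0, 1, 2} at one symbol per bit.
        precoded, prev = [], 0
        for b in bits:
            prev = b ^ prev
            precoded.append(prev)
        return [precoded[n] + (precoded[n - 1] if n > 0 else 0)
                for n in range(len(precoded))]

    def pam4_encode(bits):
        # Map bit pairs to four levels {-3, -1, +1, +3}: two bits per symbol,
        # so the symbol rate is half the bit rate.
        gray = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
        return [gray[(bits[i], bits[i + 1])] for i in range(0, len(bits) - 1, 2)]

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    print(duobinary_encode(bits))  # 8 three-level symbols
    print(pam4_encode(bits))       # 4 four-level symbols

The halved symbol rate is what lets PAM-4 tolerate band-limited components at 40 Gb/s, at the cost of a smaller eye opening per level, which is consistent with the trade-offs reported above.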
Abstract:
People depend on various sources of information when trying to verify their autobiographical memories. Yet recent research shows that people prefer to use cheap-and-easy verification strategies, even when these strategies are not reliable. We examined the robustness of this cheap-strategy bias with scenarios designed to encourage greater emphasis on source reliability. In three experiments, subjects described real (Experiments 1 and 2) or hypothetical (Experiment 3) autobiographical events and proposed strategies they might use to verify their memories of those events. Subjects also rated the reliability, the cost, and the likelihood that they would use each strategy. In line with previous work, we found that the preference for cheap information held when people described how they would verify childhood or recent memories (Experiment 1), personally important or trivial memories (Experiment 2), and even when the consequences of relying on incorrect information could be significant (Experiment 3). Taken together, our findings fit with an account of source monitoring in which the tendency to trust one's own autobiographical memories can discourage people from systematically testing them or from accepting strong disconfirmatory evidence.
Abstract:
Global warming has already begun. Climate change has become a self-propelling and self-reinforcing process as a result of the externality associated with greenhouse-gas (GHG) emissions. Although it is an externality caused by humankind, a number of unique features distinguish it from other externalities. Climate change is a global phenomenon in both its causes and its consequences. The long-term and persistent impacts of climate change will likely continue over centuries even without further anthropogenic forcing. The process is irreversible: the preindustrial (equilibrium) level of GHG concentration in the atmosphere cannot be restored, and if we do not stabilise the current level of atmospheric concentration, the situation will become much worse than it is now. Assessing the impacts of climate change requires careful consideration because of the pervasive uncertainties and risks associated with it.
Abstract:
This paper summarizes lead firms' reactions to the crisis in global automotive value chains and advances five theses. The author argues that the crisis is not over yet: the global restructuring of the industry continues. Actors in the Central and Eastern European (CEE) automotive cluster have successfully become integrated into global value chains and have thereby been the winners of past restructuring processes. Nevertheless, technological and market uncertainties prevail: the entry of new economic actors and the diffusion of new business models may, in the long run, disrupt the current status quo and jeopardise the world economic position of CEE countries that have been relying solely on their labour-cost advantages to sustain direct investment inflows into their automotive industries. In the long run, the automotive industries of CEE economies may be threatened by the transformation of the prevailing automotive business model, namely the outsourcing of manufacturing and related support activities to complex manufacturing services providers, which could lead to the closure of lead firms' manufacturing facilities in CEE. Lead firms' "flight to quality" strategy, an increased focus on high-quality, high-value-adding activities, can and should be followed by local subsidiaries; the competitiveness of the FDI host countries can be sustained only through the continuous upgrading of their local subsidiaries.
Abstract:
The paper provides a systematic review of cost-of-illness studies of an age-associated condition with high prevalence, benign prostatic hyperplasia (BPH), published in Medline between 2005 and 2015. Overall, 11 studies conducted in 8 countries were included. In the US, the annual direct medical costs per patient ranged from $255 to $5,729, while in Europe they ranged from €253 to €1,251. In 2008, the total annual direct medical costs of BPH in the UK were £180.8 million at the national level. In the US, the overall costs of BPH management in the private sector were estimated at $3.9 billion annually, of which $500 million was attributable to productivity loss (year 1999). Due to demographic factors and possible surgical innovations in the field of urology, the costs of BPH are likely to increase in the future. Over the next decade the age of retirement is projected to rise; consequently, the indirect costs related to aging-associated conditions such as BPH are expected to soar. To promote the transparent and cost-effective management of BPH, the development of rational clinical guidelines is essential and may lead to significant improvements in quality of care as well as reductions in healthcare expenditure.
Abstract:
The table illustrates estimated College of Medicine technology costs over a 10-year period. It also includes an outline describing funding and needs for the development of a Medical Library collection.
Abstract:
Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with a dedicated buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. First, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process and can thus make fast scheduling decisions. Second, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Third, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch be more work-conserving. The feasibility and stability of QLP were proved regardless of whether the traffic distribution is uniform or non-uniform. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
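To make the bandwidth-allocation idea concrete, here is a minimal Python sketch of a queue-length-proportional allocation. It is an illustration under stated assumptions, not necessarily the exact QLP formula from the dissertation: each input-output pair gets a share proportional to its virtual output queue length, normalized so that no input or output is allocated more than its full line rate (a feasible, doubly sub-stochastic rate matrix).

    # Sketch of queue-length-proportional bandwidth allocation for an N x N
    # crossbar: the share of pair (i, j) is proportional to its queue length,
    # scaled so that every row (input) and column (output) sums to at most 1.0.

    def qlp_allocate(queue_lengths):
        n = len(queue_lengths)
        row_tot = [sum(queue_lengths[i]) for i in range(n)]
        col_tot = [sum(queue_lengths[i][j] for i in range(n)) for j in range(n)]
        rates = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                denom = max(row_tot[i], col_tot[j])
                if denom > 0:
                    rates[i][j] = queue_lengths[i][j] / denom
        return rates  # feasible: no row or column sum exceeds 1.0

    q = [[4, 0, 2],
         [1, 3, 0],
         [0, 2, 2]]
    for row in qlp_allocate(q):
        print([round(r, 2) for r in row])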
Abstract:
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one.

The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose the data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, this translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and the contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted because of contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
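As background on the write-policy trade-off mentioned in the last problem, the following toy Python sketch (an illustration, not the dissertation's technique) contrasts write-through and write-back behaviour: write-back defers the expensive write to the backing store, which is exactly where the consistency risk comes from.

    # Toy cache illustrating the write-through vs. write-back trade-off.
    # Write-back marks blocks dirty and defers the slow backing-store write,
    # which improves performance but risks losing updates on a crash.

    class ToyCache:
        def __init__(self, backing_store, write_back=True):
            self.store = backing_store      # dict standing in for slow storage
            self.write_back = write_back
            self.data = {}                  # cached blocks
            self.dirty = set()              # blocks not yet persisted

        def write(self, key, value):
            self.data[key] = value
            if self.write_back:
                self.dirty.add(key)         # persist later
            else:
                self.store[key] = value     # write-through: persist immediately

        def read(self, key):
            if key not in self.data:        # miss: fetch from the backing store
                self.data[key] = self.store[key]
            return self.data[key]

        def flush(self):
            for key in self.dirty:          # persist all deferred writes
                self.store[key] = self.data[key]
            self.dirty.clear()

    disk = {"a": 1}
    cache = ToyCache(disk, write_back=True)
    cache.write("a", 2)
    print(disk["a"])   # still 1: the update has not reached the backing store
    cache.flush()
    print(disk["a"])   # now 2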
Abstract:
In the discussion "Indirect Cost Factors in Menu Pricing" by David V. Pavesic, Associate Professor, Hotel, Restaurant and Travel Administration at Georgia State University, Professor Pavesic initially states: "Rational pricing methodologies have traditionally employed quantitative factors to mark up food and beverage or food and labor because these costs can be isolated and allocated to specific menu items. There are, however, a number of indirect costs that can influence the price charged because they provide added value to the customer or are affected by supply/demand factors." The author discusses these costs and factors that must be taken into account in pricing decisions. Professor Pavesic offers as a given that menu pricing should cover costs, return a profit, reflect a value for the customer, and, in the long run, attract customers and market the establishment. "Prices that are too high will drive customers away, and prices that are too low will sacrifice profit," Professor Pavesic puts it succinctly. To dovetail with this premise, the author notes that although food costs figure markedly into menu pricing, other factors such as equipment utilization, popularity/demand, and marketing are but a few of the additional factors also to be considered. "… there is no single method that can be used to mark up every item on any given restaurant menu. One must employ a combination of methodologies and theories," says Professor Pavesic. "Therefore, when properly carried out, prices will reflect food cost percentages, individual and/or weighted contribution margins, price points, and desired check averages, as well as factors driven by intuition, competition, and demand." Additionally, Professor Pavesic wants you to know that value, as opposed to maximizing revenue, should be a primary motivating factor when designing menu pricing. This philosophy does come with certain caveats, and he explains them to you. Generically speaking, Professor Pavesic says, "The market ultimately determines the price one can charge." But, in fine-tuning that decree, he further offers, "Lower prices do not automatically translate into value and bargain in the minds of the customers. Having the lowest prices in your market may not bring customers or profit." "Too often operators engage in price wars through discount promotions and find that profits fall and their image in the marketplace is lowered," Professor Pavesic warns. In reference to intangibles that influence menu pricing, service is at the top of the list. Ambience, location, amenities, product [i.e. food] presentation, and price elasticity are discussed as well. Be aware of price-value perception; Professor Pavesic explains this concept to you. Professor Pavesic closes with a brief overview of a la carte pricing, its pros and cons.
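To make the quantitative side of the discussion concrete, here is a small worked example in Python of the two markup calculations the article alludes to: pricing from a target food-cost percentage and pricing from a desired contribution margin. The numbers are hypothetical, not taken from Pavesic's article.

    # Hypothetical numbers illustrating two common menu-markup methods.

    food_cost = 4.00                # ingredient cost of the item, in dollars

    # 1) Target food-cost percentage: price = food cost / target percentage.
    target_food_cost_pct = 0.30
    price_by_pct = food_cost / target_food_cost_pct             # about $13.33

    # 2) Desired contribution margin: price = food cost + target margin,
    #    where the margin must cover labor, overhead, and profit.
    target_contribution_margin = 9.50
    price_by_margin = food_cost + target_contribution_margin    # $13.50

    print(f"Food-cost-percentage price: ${price_by_pct:.2f}")
    print(f"Contribution-margin price:  ${price_by_margin:.2f}")

Pavesic's point is that neither figure should be charged mechanically; demand, competition, and price-value perception adjust the final menu price.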
Abstract:
Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g. ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for the reduction of bandwidth allocation, and guidelines for central-to-node communication protocols. Our conclusion is that dialed data lines offer a cost-effective alternative for the implementation of distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
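A minimal sketch of the kind of communication-cost estimate such methods require, assuming a simple tariff with a per-call setup charge and per-minute billing; the tariff figures and the helper function are hypothetical and not taken from the dissertation.

    # Hypothetical cost of shipping a query result over a dialed ISDN data line:
    # a setup charge plus per-minute usage billed at a tariff rate.
    import math

    def dialed_line_cost(result_bytes, line_kbps=64, setup_charge=0.10,
                         rate_per_minute=0.05):
        transfer_seconds = (result_bytes * 8) / (line_kbps * 1000)
        billed_minutes = math.ceil(transfer_seconds / 60)  # whole-minute billing
        return setup_charge + billed_minutes * rate_per_minute

    # 5 MB query result over a single 64 kb/s ISDN B channel:
    print(f"${dialed_line_cost(5_000_000):.2f}")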
Abstract:
BACKGROUND: Cuban Americans have a high prevalence of type 2 diabetes, placing them at risk for cardiovascular disease (CVD) and increased medical costs. Little is known regarding the lifestyle risk factors for CVD among Cuban Americans. This study investigated modifiable CVD risk factors in Cuban Americans with and without type 2 diabetes. METHODS: Sociodemographics, anthropometrics, blood pressure, physical activity, dietary intake, and biochemical parameters were collected and assessed for n=79 Cuban Americans with and n=80 without type 2 diabetes. RESULTS: Fourteen percent of those with diabetes and 24 percent of those without diabetes engaged in the recommended level of physical activity. Over 90 percent exceeded the recommended intake of saturated fats. Thirty-five percent were former or current smokers. DISCUSSION: Cuban Americans had several lifestyle factors that are likely to increase the risk of CVD. Their dietary factors were associated with blood cholesterol and body weight, which have been shown to impact medical expenses. These findings may be used for designing programs for the prevention of CVD as well as type 2 diabetes in Cuban Americans.
Abstract:
The main objective of this work was to enable the recognition of human gestures through the development of a computer program. The program captures the gestures executed by the user through a camera attached to the computer and sends to the robot the command corresponding to the gesture. In total, five gestures made by the human hand were interpreted. The software (developed in C++) makes extensive use of computer vision concepts and of the open-source library OpenCV, which directly impact the overall efficiency of the control of mobile robots. The computer vision concepts include the use of filters to smooth/blur the image for noise reduction, color spaces suited to the application, and other information useful for manipulating digital images. The OpenCV library was essential to the project because it provides functions/procedures for filters, image borders, image area, the geometric center of contours, conversion between color spaces, convex hull and convexity defects, plus all the means necessary for characterizing image features. During the development of the software several problems appeared, such as false positives (noise), poor performance caused by the insertion of multiple filters with oversized masks, and problems arising from the choice of color space for processing human skin tones. However, after the development of seven versions of the control software, it was possible to minimize the occurrence of false positives through better use of filters combined with a well-dimensioned mask size (tested at run time), all associated with a programming logic that was refined over the construction of the seven versions. At the end of the development, the resulting software met the established requirements. After the completion of the control software, the overall effectiveness of the successive programs was evaluated; in particular, version V achieved 84.75%, version VI 93.00%, and version VII 94.67%, showing that the final program performed well in interpreting gestures and proving that it was possible to control the mobile robot through human gestures, without the need for external accessories, while giving it better mobility and reducing the cost of maintaining such a system. The great merit of the program is its capacity to help demystify the human/machine relationship, since it uses an easy and intuitive interface for the control of mobile robots. Another important feature is that it is not necessary to be close to the mobile robot in order to control it; to control the equipment it is only necessary to receive the address that the Robotino passes to the program via the network or Wi-Fi.
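The pipeline described above can be sketched in a few lines with OpenCV, shown here in Python rather than the C++ of the original software; the HSV skin-tone range, the defect-depth threshold, and the assumption that the largest contour is the hand are illustrative guesses, the kind of parameters the original program refined over its seven versions.

    # Illustrative OpenCV 4.x pipeline: blur, skin-tone threshold in HSV,
    # largest contour, then convex hull and convexity defects as a rough
    # proxy for the number of extended fingers.
    import cv2

    def count_deep_defects(frame_bgr):
        blur = cv2.GaussianBlur(frame_bgr, (5, 5), 0)          # noise reduction
        hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))   # rough skin range
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0
        hand = max(contours, key=cv2.contourArea)  # assume largest blob is the hand
        hull = cv2.convexHull(hand, returnPoints=False)
        if hull is None or len(hull) < 4:
            return 0
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return 0
        # Deep defects are the valleys between fingers; depth is fixed-point * 256.
        return int(sum(1 for d in defects[:, 0] if d[3] / 256.0 > 30))

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print("deep convexity defects:", count_deep_defects(frame))
    cap.release()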
Supporting Run-time Monitoring of UML-RT through Customizable Monitoring Configurations in PapyrusRT
Abstract:
Model Driven Engineering uses the principle that code can be generated automatically from software models, potentially saving development time and cost. With this methodology, a system's structure and behaviour can be expressed in more abstract, high-level terms without some of the accidental complexity that the use of a general-purpose language can bring. The models are the actual implementation of the system, unlike in traditional software development, where models are often used for documentation purposes only. However, once the code is generated from the model, testing and debugging activities tend to happen at the code level and the model is not updated. We believe that monitoring at the model level could facilitate quality assurance activities, as errors are detected in an early phase of development. In this thesis, we create a Monitoring Configuration for PapyrusRT, an open-source model-driven engineering tool in Eclipse. We support the run-time monitoring of UML-RT elements with the tracing tool LTTng. We annotate the model with monitoring information to be used by the code generator for adding tracepoint statements for the corresponding elements. We provide the option of a timing specification to discover latency errors on the model. We validate the results by creating and tracing real-time models in PapyrusRT.
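Purely as an illustration of what model-level run-time monitoring produces, the sketch below emits a trace event with a latency measurement at a capsule state transition. It uses LTTng's Python logging agent rather than the tracepoints that the PapyrusRT code generator actually inserts into the generated C++, and the channel and capsule names are hypothetical; with the lttngust package installed and a recording session with Python events enabled, the log records become LTTng trace events.

    # Illustration of emitting trace events at model-level state transitions.
    # The real PapyrusRT workflow adds tracepoints to the generated C++; here
    # the LTTng Python logging agent plays the same role for demonstration.
    import logging
    import time

    try:
        import lttngust  # registers the LTTng agent with the logging module
    except ImportError:
        pass             # without the agent, events simply go to normal logging

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("papyrusrt.monitor")   # hypothetical channel name

    def trace_transition(capsule, source, target, effect=lambda: None):
        start = time.monotonic()
        effect()                                      # transition effect code
        latency_ms = (time.monotonic() - start) * 1000.0
        logger.debug("capsule=%s transition=%s->%s latency_ms=%.3f",
                     capsule, source, target, latency_ms)

    trace_transition("PingerCapsule", "Waiting", "Playing")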
Abstract:
The paper empirically tests the relationship between earnings volatility and cost of debt with a sample of more than 77,000 Swedish limited companies over the period 2006 to 2013, observing more than 677,000 firm-years. Since, as many researchers have recently noted, there is very limited evidence of the association between earnings volatility and cost of debt, this paper contributes to the existing literature on earnings quality and debt contracts, especially on the consequences of earnings quality in the debt market. Earnings volatility is used as a proxy for earnings quality, while cost of debt is a component of the debt contract. After controlling for firms' profitability, liquidity, solvency, cash flow volatility, accruals volatility, sales volatility, business risk, financial risk, and size, the paper studies the effect of earnings volatility, measured by the standard deviation of Earnings Before Interest, Taxes, Depreciation and Amortization (EBITDA), on the cost of debt. The overall finding suggests that lenders in Sweden do take earnings volatility into consideration when determining the cost of debt for borrowers. However, a deeper analysis of various industries suggests that earnings volatility is not consistently used by lenders across all industries. Lenders in Sweden are instead more sensitive to borrowers' financial risk across all industries. It may also be stated that larger borrowers tend to secure loans at lower interest rates; these results are consistent across the majority of industries. The Swedish debt market appears to have been well prepared for financial crises, as the debt crisis seems to have had little or no adverse effect on borrowers' cost of capital. This study provides the only direct empirical evidence on the association between earnings volatility and cost of debt. Prior indirect research suggests that earnings volatility has an adverse effect on cost of debt (i.e., an increase in earnings volatility will increase a firm's cost of debt). Our direct evidence from the Swedish debt market is consistent with this for some industries, including media, real estate activities, transportation & warehousing, and other consumer services.
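A minimal sketch of the core measurement in such a study, assuming a per-firm panel with yearly EBITDA, interest expense, and total debt; the column names, toy numbers, and single-variable OLS slope are illustrative, whereas the paper's models include the full set of controls listed above.

    # Sketch: earnings volatility as the per-firm standard deviation of EBITDA,
    # cost of debt as interest expense over total debt, and a simple OLS slope.
    # Column names and values are illustrative toy data.
    import numpy as np
    import pandas as pd

    panel = pd.DataFrame({
        "firm":     ["A"] * 4 + ["B"] * 4,
        "ebitda":   [10, 12, 8, 11, 5, 15, 2, 18],
        "interest": [0.9, 1.0, 1.1, 1.0, 1.6, 1.7, 1.9, 1.8],
        "debt":     [20, 20, 21, 21, 22, 22, 23, 23],
    })

    per_firm = panel.groupby("firm").agg(
        earnings_volatility=("ebitda", "std"),
        cost_of_debt=("interest", "mean"),
    )
    per_firm["cost_of_debt"] /= panel.groupby("firm")["debt"].mean()

    slope, intercept = np.polyfit(per_firm["earnings_volatility"],
                                  per_firm["cost_of_debt"], 1)
    print(per_firm)
    print(f"OLS slope (volatility -> cost of debt): {slope:.4f}")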
Abstract:
The meteorological and chemical transport model WRF-Chem was implemented to forecast PM10 concentrations over Poland. WRF-Chem version 3.5 was configured with three one-way nested domains using the GFS meteorological data and the TNO MACC II emissions. The 48-hour forecasts were run for each day of the winter and summer periods of 2014; model performance for winter decreases only slightly with forecast lead time. The model in general captures the variability in observed PM10 concentrations for most of the stations. However, for some locations and specific episodes, the model performance is poor and the results cannot yet be used by official authorities. We argue that higher-resolution, sector-based emission data will be helpful for this analysis, in connection with a focus on planetary boundary layer processes in WRF-Chem and their impact on the initial distribution of emissions in both time and space.
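As an illustration of the kind of station-level verification behind these statements, the short Python sketch below computes mean bias, RMSE, and Pearson correlation of forecast against observed PM10; the arrays are hypothetical, whereas the actual evaluation used the full winter and summer 2014 station records.

    # Hypothetical station-level verification of forecast vs. observed PM10:
    # mean bias, root-mean-square error, and Pearson correlation.
    import numpy as np

    observed = np.array([35.0, 42.0, 60.0, 55.0, 30.0, 80.0])   # ug/m3
    forecast = np.array([30.0, 45.0, 50.0, 58.0, 28.0, 65.0])   # ug/m3

    bias = np.mean(forecast - observed)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    corr = np.corrcoef(forecast, observed)[0, 1]

    print(f"bias = {bias:.1f} ug/m3, RMSE = {rmse:.1f} ug/m3, r = {corr:.2f}")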