11 results for Hold-up problem

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

This note presents a simple model for the prediction of liquid hold-up in two-phase horizontal pipe flow for the stratified roll wave (St+RW) flow regime. Liquid hold-up data for horizontal two-phase pipe flow [1, 2, 3, 4, 5 and 6] exhibit a steady increase with liquid velocity and a more dramatic fall with increasing gas rate, as shown by Hand et al. [7 and 8] for example. Liquid hold-up is also reported to vary with pipe diameter. Generally, if the initial liquid rate for the no-gas flow condition gives a liquid height below the pipe centre line, the flow patterns pass successively through the stratified (St), stratified ripple (St+R), stratified roll wave, film plus droplet (F+D) and finally the annular (A+D, A+RW, A+BTS) regimes as the gas rate is increased. Hand et al. [7 and 8] have given a detailed description of this progression in flow regime development and definitions of the patterns involved. Although more than one hundred models have been developed to predict liquid hold-up, none has proved universally useful, and only a handful have proved applicable to specific flow regimes [9, 10, 11 and 12]. One of the most intractable regimes to predict has been the stratified roll wave pattern, where the liquid hold-up shows the most dramatic change with gas flow rate. It has been suggested that momentum balance-type models, which predict both hold-up and pressure drop, can be applied universally across all flow regimes, and particularly to the difficult stratified roll wave pattern. Donnelly [1] recently demonstrated that the momentum balance models experienced some difficulties in the prediction of this regime. In brief, these models differ in the friction factor or shear stress assumed on the surfaces within the pipe, particularly at the liquid–gas interface. The Baker–Jardine model [13], when tested against the 0.0454 m i.d. data of Nguyen [2], exhibited a wide scatter for both liquid hold-up and pressure drop, as shown in Fig. 1. The Andritsos–Hanratty model [14] gave better prediction of pressure drop but a wide scatter for liquid hold-up estimation (cf. Fig. 2) when tested against the 0.0935 m i.d. data of Hand [5]. The Spedding–Hand model [15], shown in Fig. 3 against the data of Hand [5], gave improved performance but was still unsatisfactory in the prediction of hold-up for stratified-type flows. The MARS model of Grolman [6] gave better prediction of hold-up (cf. Fig. 4) but a deterioration in the estimation of pressure drop when tested against the data of Nguyen [2]. Thus no method is available that accurately predicts liquid hold-up across the whole range of flow patterns, and in particular for the stratified plus roll wave regime. This is particularly unfortunate since the stratified-type regimes are perhaps the most common patterns found in multiphase lines.
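
As a rough illustration of the momentum balance approach discussed above, the sketch below solves a Taitel–Dukler-style combined momentum balance for the equilibrium liquid level in a horizontal pipe and converts that level into a hold-up fraction. It is not any of the models cited (Baker–Jardine, Andritsos–Hanratty, Spedding–Hand or MARS); the Blasius-type friction factors, the smooth-interface closure (interfacial shear taken equal to the gas-wall shear) and the fluid properties are illustrative assumptions.

```python
"""Minimal Taitel-Dukler-style momentum-balance sketch for liquid hold-up
in horizontal stratified gas-liquid pipe flow.  Illustrative only: the
friction-factor correlations, fluid properties, and smooth-interface
closure (tau_i = tau_WG) are assumptions, not the models cited above."""
import math
from scipy.optimize import brentq

def holdup_stratified(U_SL, U_SG, D,
                      rho_L=1000.0, rho_G=1.2,
                      mu_L=1.0e-3, mu_G=1.8e-5):
    """Liquid hold-up for superficial velocities U_SL, U_SG (m/s) in a pipe of i.d. D (m)."""
    A = math.pi * D**2 / 4.0

    def friction(Re):
        # Laminar below Re = 2300, Blasius-type turbulent fit above (assumed)
        return 16.0 / Re if Re < 2300.0 else 0.046 * Re**-0.2

    def residual(hL):
        # Circular-segment geometry for a flat liquid surface at height hL
        theta = 2.0 * math.acos(1.0 - 2.0 * hL / D)        # wetted angle
        A_L = (D**2 / 8.0) * (theta - math.sin(theta))
        A_G = A - A_L
        S_L = D * theta / 2.0
        S_G = D * (2.0 * math.pi - theta) / 2.0
        S_i = D * math.sin(theta / 2.0)

        # Actual phase velocities from the superficial velocities
        u_L = U_SL * A / A_L
        u_G = U_SG * A / A_G

        # Hydraulic diameters and wall/interface shear stresses
        D_L = 4.0 * A_L / S_L
        D_G = 4.0 * A_G / (S_G + S_i)
        tau_WL = friction(rho_L * u_L * D_L / mu_L) * rho_L * u_L**2 / 2.0
        tau_WG = friction(rho_G * u_G * D_G / mu_G) * rho_G * u_G**2 / 2.0
        tau_i = tau_WG                                     # smooth-interface assumption

        # Combined momentum balance for a horizontal pipe (gravity term drops out)
        return (tau_WG * S_G / A_G - tau_WL * S_L / A_L
                + tau_i * S_i * (1.0 / A_L + 1.0 / A_G))

    # Solve for the equilibrium liquid height, then convert to hold-up A_L / A
    hL = brentq(residual, 1e-6 * D, (1.0 - 1e-6) * D)
    theta = 2.0 * math.acos(1.0 - 2.0 * hL / D)
    return (D**2 / 8.0) * (theta - math.sin(theta)) / A

if __name__ == "__main__":
    # Example: 0.0935 m i.d. pipe (as in the Hand data); velocities are illustrative
    print(holdup_stratified(U_SL=0.1, U_SG=5.0, D=0.0935))
```

The cited models differ mainly in how the wall and interfacial shear terms are closed; in this sketch that choice is isolated in the friction correlation and the tau_i line.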

Relevance: 100.00%

Abstract:

This article offers a replication for Britain of Brown and Heywood's analysis of the determinants of performance appraisal in Australia. Although there are some important limiting differences between our two datasets - the Australian Workplace Industrial Relations Survey (AWIRS) and the Workplace Employment Relations Survey (WERS) - we reach one central point of agreement and one intriguing shared insight. First, performance appraisal is negatively associated with tenure: where employers cannot rely on the carrot of deferred pay or the stick of dismissal to motivate workers, they will tend to rely more on monitoring, ceteris paribus. Second, employer monitoring and performance pay may be complementary. However, consonant with the disparate results from the wider literature, there is more modest agreement on the contribution of specific human resource management practices, and still less on the role of job control.

Relevance: 80.00%

Abstract:

Hydrogels, materials that can absorb and retain large quantities of water, could revolutionise medicine. Our bodies contain up to 60% water, but hydrogels can hold up to 90%. It is this similarity to human tissue that has led researchers to examine if these materials could be used to improve the treatment of a range of medical conditions including heart disease and cancer.

Relevance: 30.00%

Abstract:

A standard problem within universities is that of teaching space allocation, which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually on courses that are expected to fit into one room. However, it can also happen that a course needs to be broken up, or ‘split’, into multiple sections. A lecture might be too large to fit into any one room. Another common example is that of seminars or tutorials: although hundreds of students may be enrolled on a course, it is often subdivided into particular types and sizes of events depending on the pedagogic requirements of that particular course. Typically, decisions about how to split courses need to be made within the context of limited space. Institutions do not have an unlimited number of teaching rooms, and need to use those that they do have effectively. The efficiency of space usage is usually measured by the overall ‘utilisation’, which is the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises, with a trade-off between satisfying preferences on splitting, increasing utilisation, and satisfying other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations themselves are based on a local search method that attempts to optimise the space utilisation by means of a ‘dynamic splitting’ strategy. The local moves are designed to improve utilisation and satisfy the other constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives.
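
To make the ‘dynamic splitting’ idea concrete, here is a minimal local search sketch: each move re-splits one course into a different number of near-equal sections, sections are packed into room-slots best-fit first, and the score trades utilisation of the occupied room-slots against a penalty per extra section. The instance data, the split-penalty weight and the greedy packing rule are illustrative assumptions, not the method of the paper.

```python
"""Minimal sketch of local search with 'dynamic splitting' for teaching space
allocation.  Hypothetical instance and objective weights; not the paper's method."""
import random

random.seed(0)

# Hypothetical instance: room capacities, timeslots, and course enrolments
ROOMS = {"R1": 60, "R2": 60, "R3": 35}
SLOTS = ["Mon9", "Mon11", "Tue9", "Tue11"]
COURSES = {"C1": 100, "C2": 55, "C3": 30}

SPLIT_PENALTY = 0.05   # assumed cost per extra section of a split course

def sections(course_size, k):
    """Split a course of course_size students into k near-equal sections."""
    base = course_size // k
    return [base + (1 if i < course_size % k else 0) for i in range(k)]

def greedy_assign(splits):
    """Pack every section into a free (room, slot), tightest room that fits first.
    Returns the assignment and the seats actually used, or (None, 0) if infeasible."""
    free = [(room, slot) for room in ROOMS for slot in SLOTS]
    used_seats, assignment = 0, {}
    all_sections = [(c, s) for c, k in splits.items() for s in sections(COURSES[c], k)]
    for course, size in sorted(all_sections, key=lambda cs: -cs[1]):
        fits = [rs for rs in free if ROOMS[rs[0]] >= size]
        if not fits:
            return None, 0
        choice = min(fits, key=lambda rs: ROOMS[rs[0]])   # tightest room that fits
        free.remove(choice)
        assignment.setdefault(course, []).append((choice, size))
        used_seats += size
    return assignment, used_seats

def score(splits):
    """Utilisation of the occupied room-slots minus a penalty for splitting."""
    assignment, used = greedy_assign(splits)
    if assignment is None:
        return -1.0
    occupied_capacity = sum(ROOMS[room]
                            for sec_list in assignment.values()
                            for (room, _slot), _size in sec_list)
    utilisation = used / occupied_capacity
    extra_sections = sum(k - 1 for k in splits.values())
    return utilisation - SPLIT_PENALTY * extra_sections

# Hill climbing: each local move re-splits one course into 1..4 sections
splits = {c: 1 for c in COURSES}
best = score(splits)
for _ in range(200):
    course = random.choice(list(COURSES))
    trial = dict(splits)
    trial[course] = random.randint(1, 4)
    trial_score = score(trial)
    if trial_score >= best:
        splits, best = trial, trial_score

print("splits:", splits, "score:", round(best, 3))
```

In this toy instance the largest course does not fit into any single room, so the search is forced to split it, while the per-section penalty discourages splitting the smaller courses further.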

Relevance: 30.00%

Abstract:

The Perils of Moviegoing in America is a film history that examines the various physical and (perceived) moral dangers facing audiences during the first fifty years of film exhibition.

Chapter 1: “Conflagration”
As early as 1897, a major fire broke out at a film exhibition in San Francisco, with flames burning the projectionist and nearby audience members. From that point until the widespread adoption of safety stock in 1950, fires were a very common movie-going experience. Hundreds of audience members lost their lives in literally thousands of theatre fires, ranging from early nickelodeons to the movie palaces of the thirties and forties.

Chapter 2: “Thieves Among Us”
Bandits robbed movie theatres on hundreds of occasions from the early days of film exhibition through the end of the Great Depression. They held up ticket booths, and they dynamited theatre safes. They also shot theatre managers, ushers, and audience members, as a great many of the robberies occurred while movies were playing on the screens inside.

Chapter 3: “Bombs Away”
Bombings occurred at movie theatres in small towns and large cities on literally hundreds of occasions from 1914 to the start of World War II. Some were incendiary bombs, and some were stench bombs; both could be fatal, whether due to explosions or to the trampling of panicked moviegoers.

Chapter 4: “It’s Catching”
Widespread movie-going in the early 20th century provoked an outcry from numerous doctors and optometrists who believed that viewing films could do irreparable harm to the vision of audience members. Medical publications (including the Journal of the American Medical Association) published major studies on this perceived problem, which then filtered into popular-audience magazines and newspapers.

Chapter 5: “The Devil’s Apothecary Shops”
Sitting in the dark with complete strangers proved worrisome for many early filmgoers, who had good reason to be concerned. Darkness meant that prostitutes could easily work in the balconies of some movie theatres, as could “mashers” who molested female patrons (and sometimes children) after the lights were dimmed. That was all in addition to the various murderers who used the cover of darkness to commit their crimes at movie theatres.

Chapter 6: “Blue Sundays”
Blue laws were those regulations that prohibited businesses from operating on Sundays. Most communities across the US had such legislation on their books, which by the nickelodeon era were at odds with the thousands of filmgoers who went to the movies every Sunday. Theatre managers were often arrested, making newspaper headlines over and over again. Police sometimes even arrested entire film audiences as accomplices in the Blue Law violations.

Chapter 7: “Something for Nothing”
In an effort to bolster ticket sales, many movie theatres in the 1910s began to hold lotteries in which lucky audience members won cash prizes; by the time of the Great Depression, lotteries like “Bank Night” became a common aspect of the theatre-going enterprise. However, reception studies have generally overlooked the intense (and sometimes coordinated) efforts by police, politicians, and preachers to end this practice, which they viewed as illegal and immoral gambling.

Relevance: 30.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results that show that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
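
As a small point of reference for the message and time bounds discussed above, the toy simulation below runs the classic "flood the largest random rank" baseline on a synchronous network: every node draws a random rank, forwards the best rank it has seen whenever that value improves, and at termination only the node still holding its own rank knows it is the leader (the implicit version). This is not the O(m)-message algorithm of the paper; the graph encoding and 64-bit ranks are assumptions made for illustration.

```python
"""Toy synchronous simulation of implicit leader election by flooding the
largest random rank.  Illustrative baseline for counting rounds and messages;
NOT the O(m)-message algorithm of the paper."""
import random

def elect(adj, seed=None):
    """adj maps each node to a list of neighbours (undirected, connected graph)."""
    rng = random.Random(seed)
    rank = {v: (rng.getrandbits(64), v) for v in adj}   # random rank; node id breaks ties
    best = dict(rank)                                   # best rank each node has seen
    changed = set(adj)                                  # nodes that transmit next round
    rounds = messages = 0
    while changed:
        rounds += 1
        inbox = {v: [] for v in adj}
        for v in changed:                               # synchronous send to all neighbours
            for u in adj[v]:
                inbox[u].append(best[v])
                messages += 1
        changed = set()
        for v, received in inbox.items():
            top = max(received, default=best[v])
            if top > best[v]:
                best[v] = top
                changed.add(v)
    # Implicit election: only the node whose own rank survived knows it is the leader.
    leader = next(v for v in adj if best[v] == rank[v])
    return leader, rounds, messages

if __name__ == "__main__":
    ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # 8-node ring, D = 4
    print(elect(ring, seed=42))
```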

Relevance: 30.00%

Abstract:

We consider the problem of self-healing in networks that are reconfigurable in the sense that they can change their topology during an attack. Our goal is to maintain connectivity in these networks, even in the presence of repeated adversarial node deletion, by carefully adding edges after each attack. We present a new algorithm, DASH, that provably ensures that: 1) the network stays connected even if an adversary deletes up to all nodes in the network; and 2) no node ever increases its degree by more than 2 log n, where n is the number of nodes initially in the network. DASH is fully distributed; adds new edges only among neighbors of deleted nodes; and has average latency and bandwidth costs that are at most logarithmic in n. DASH has these properties irrespective of the topology of the initial network, and is thus orthogonal and complementary to traditional topology-based approaches to defending against attack. We also prove lower bounds showing that DASH is asymptotically optimal in terms of minimizing maximum degree increase over multiple attacks. Finally, we present empirical results on power-law graphs that show that DASH performs well in practice, and that it significantly outperforms naive algorithms in reducing maximum degree increase.
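
The sketch below shows the basic shape of a self-healing reconfiguration in the spirit described above: when the adversary deletes a node, new edges are added only among that node's former neighbours, here by simply chaining them into a path. This naive rule preserves connectivity but is not DASH and does not achieve its 2 log n bound on degree increase; the data structure and the path-based healing move are illustrative assumptions.

```python
"""Minimal sketch of self-healing by local edge addition.  Illustrative only:
deleted nodes' former neighbours are rewired into a path.  This keeps the
graph connected but is NOT DASH and gives no 2 log n degree guarantee."""

class SelfHealingGraph:
    def __init__(self, edges):
        self.adj = {}
        for u, v in edges:
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)

    def delete(self, x):
        """Adversarial deletion of x, healed using only x's former neighbours."""
        neighbours = sorted(self.adj.pop(x, set()))
        for n in neighbours:
            self.adj[n].discard(x)
        # Healing move: chain the orphaned neighbours into a path so anything
        # previously connected through x stays connected.
        for a, b in zip(neighbours, neighbours[1:]):
            self.adj[a].add(b)
            self.adj[b].add(a)

    def connected(self):
        """Breadth-first search connectivity check."""
        if not self.adj:
            return True
        start = next(iter(self.adj))
        seen, frontier = {start}, [start]
        while frontier:
            nxt = []
            for v in frontier:
                for u in self.adj[v]:
                    if u not in seen:
                        seen.add(u)
                        nxt.append(u)
            frontier = nxt
        return len(seen) == len(self.adj)

if __name__ == "__main__":
    # Star graph: deleting the hub is the worst case for a static topology.
    g = SelfHealingGraph([(0, i) for i in range(1, 9)])
    g.delete(0)
    print(g.connected())   # True: the leaves were chained back into a path
```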

Relevance: 30.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm. An O(D)-time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.

Relevance: 30.00%

Abstract:

PURPOSE:

To report determinants of outcomes and follow-up in a large Mexican pediatric cataract project.

SETTING:

Hospital Luis Sanchez Bulnes, Mexico City, Mexico.

METHODS:

Data were collected prospectively from a pediatric cataract surgery program at the Hospital Luis Sanchez Bulnes, implemented by Helen Keller International. Preoperative data included age, sex, baseline visual acuity, type of cataract, laterality, and presence of conditions such as amblyopia. Surgical data included vitrectomy, capsulotomy, complications, and use of intraocular lenses (IOLs). Postoperative data included final visual acuity, refraction, number of follow-up visits, and program support for follow-up.

RESULTS:

Of 574 eyes of 415 children (mean age 7.1 ± 4.7 [SD] years), IOLs were placed in 416 (87%). At least 1 follow-up was attended by 408 patients (98.3%) (mean total follow-up 3.5 ± 1.8 months); 40% of eyes achieved a final visual acuity of 6/18 or better. Children living farther from the hospital had fewer postoperative visits (P = .04), while children receiving program support had more visits (P = .001). Factors predictive of better acuity included receiving an IOL during surgery (P = .04) and provision of postoperative spectacles (P = .001). Predictive of worse acuity were amblyopia (P = .003), postoperative complications (P = .0001), unilateral surgery (P = .0075), and female sex (P = .045).

CONCLUSIONS:

The results underscore the importance of surgical training in reducing complications, early intervention before amblyopia (observed in 40% of patients) can develop, and vigorous treatment if amblyopia is present. The positive impact of program support on follow-up is encouraging, although direct financial support may pose a problem for sustainability. More work is needed to understand reasons for worse outcomes in girls.