228 results for Pooling


Relevance: 10.00%

Abstract:

OBJECTIVE: Lower limb amputation is often associated with a high risk of early post-operative mortality. Mortality rates are also increasingly being put forward as a possible benchmark for surgical performance. The primary aim of this systematic review is to investigate early post-operative mortality following major lower limb amputation in population- or regional-based studies, and the reported factors that might influence these mortality outcomes. METHODS: Embase, PubMed, CINAHL, and PsycINFO were searched for publications in any language on 30-day or in-hospital mortality after major lower limb amputation in population- or regional-based studies. PRISMA guidelines were followed. A self-developed checklist was used to assess quality and susceptibility to bias. Summary data were extracted for the percentage of the population who died; pooling of quantitative results was not possible because of methodological differences between studies. RESULTS: Of the 9,082 publications identified, results were included from 21. The percentage of the population undergoing amputation who died within 30 days ranged from 7% to 22%; the in-hospital equivalent was 4-20%. Transfemoral amputation and older age were associated with a higher proportion of early post-operative mortality compared with transtibial amputation and younger age, respectively. Other patient factors or surgical treatment choices related to increased early post-operative mortality varied between studies. CONCLUSIONS: Early post-operative mortality rates vary from 4% to 22%. Very limited data are presented on patient-related factors (age, comorbidities) that influence mortality. Even less is known about factors related to surgical treatment choices, which are limited to amputation level. More information is needed to allow comparison across studies or any benchmarking of acceptable mortality rates. Agreement is needed on the key factors to be reported.

Relevance: 10.00%

Abstract:

There has been a recent spate of high-profile infrastructure cost overruns in Australia and internationally. This is just the tip of a longer-term, deep-seated problem with initial budget estimating practice, well recognised in both academic research and industry reviews: the problem of uncertainty. A case study of the Sydney Opera House is used to identify and illustrate the key causal factors and system dynamics of cost overruns. It is conventionally the role of risk management to deal with such uncertainty, but the type and extent of the uncertainty involved in complex projects is shown to render established risk management techniques ineffective. This paper considers a radical advance on current budget estimating practice that involves a particular approach to statistical modelling complemented by explicit training in estimating practice. The statistical modelling approach combines the probability management techniques of Savage, which operate on actual distributions of values rather than flawed representations of distributions, with the data pooling technique of Skitmore, in which the size of the reference set is optimised. Estimating training employs calibration development methods pioneered by Hubbard, which reduce the over-confidence bias of experts and improve the consistency of subjective decision-making. A new framework for initial budget estimating practice is developed based on the combined statistical and training methods, with each technique explained and discussed.
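
A minimal sketch of the two statistical ideas named above: Savage-style probability management, which carries forward actual samples of outcomes rather than fitted summary shapes, and Skitmore-style pooling of a reference set of comparable past projects. The comparator values, base estimate, and pool sizes below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cost-overrun ratios (final cost / initial budget) for past projects,
# grouped from most to least similar to the project being estimated.
reference_sets = [
    np.array([1.15, 1.32, 1.08, 1.45]),                          # closest comparators
    np.array([1.05, 1.22, 1.18, 1.60, 1.12, 1.30]),              # broader class
    np.array([0.98, 1.75, 1.25, 1.40, 1.10, 1.55, 1.20, 1.35]),  # broadest class
]

def pooled_samples(sets, k):
    """Pool the k most-similar reference sets into one empirical distribution."""
    return np.concatenate(sets[:k])

def simulate_budget(base_estimate, overrun_samples, n=10_000):
    """Resample actual overrun ratios (a SIP-style vector) rather than an assumed shape."""
    draws = rng.choice(overrun_samples, size=n, replace=True)
    return base_estimate * draws

base = 100.0  # initial point estimate, in $m (illustrative)
for k in range(1, len(reference_sets) + 1):
    outcomes = simulate_budget(base, pooled_samples(reference_sets, k))
    p50, p80 = np.percentile(outcomes, [50, 80])
    print(f"pool size {k}: P50 = {p50:.1f}, P80 = {p80:.1f}")
```

Resampling stored outcome vectors preserves skew and fat tails, which is the point of operating on actual distributions rather than flawed parametric representations of them.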

Relevance: 10.00%

Abstract:

With the advent of the Internet, video over IP is gaining popularity. In such an environment, scalability and fault tolerance will be the key issues. Existing video on demand (VoD) service systems are usually neither scalable nor tolerant of server faults and hence are ill suited to multi-user, failure-prone networks such as the Internet. Current research on VoD often focuses on increasing the throughput and reliability of a single server, but rarely addresses the smooth provision of service during server and network failures. Reliable Server Pooling (RSerPool), which is capable of providing high availability by using multiple redundant servers as a single source point, can be a solution to such failures. During a server failure, the continuity of service is retained by another server. In order to achieve transparent failover, efficient state sharing is an important requirement. In this paper, we present an elegant, simple, efficient and scalable approach in which the state is transferred by the client itself, using an extended cookie mechanism, which ensures that there is no noticeable disruption or change in video quality.
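
A minimal sketch of the client-held state cookie idea, not of RSerPool's ASAP/ENRP protocols themselves; the server pool, cookie fields, and the chunk-fetch placeholder are assumptions made purely for illustration.

```python
class VideoClient:
    """Toy VoD client that keeps its own session state in an extended cookie."""

    def __init__(self, server_pool):
        self.server_pool = list(server_pool)   # redundant servers seen as one source
        self.active = self.server_pool[0]
        self.cookie = {"session": "abc123", "stream": "movie.ts", "position": 0}

    def _fetch_chunk(self, server):
        # Placeholder for a real network request; fails if the server is marked down.
        if server.get("failed"):
            raise ConnectionError(f"{server['host']} unreachable")
        return f"chunk of {self.cookie['stream']} at {self.cookie['position']}s from {server['host']}"

    def play(self, seconds):
        for _ in range(seconds):
            try:
                chunk = self._fetch_chunk(self.active)
            except ConnectionError:
                # Transparent failover: switch to another pool element and replay
                # the client-held cookie, so no server-to-server state transfer is
                # needed and playback resumes where it stopped.
                self.server_pool.remove(self.active)
                self.active = self.server_pool[0]
                chunk = self._fetch_chunk(self.active)
            self.cookie["position"] += 1           # extended cookie tracks playback state
            print(chunk)


pool = [{"host": "vod1.example.net"}, {"host": "vod2.example.net"}]
client = VideoClient(pool)
client.play(2)
pool[0]["failed"] = True    # simulate a server failure mid-stream
client.play(2)              # continues from vod2 using the client's cookie
```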

Relevance: 10.00%

Abstract:

We consider a network in which several service providers offer wireless access to their respective subscribed customers through potentially multihop routes. If providers cooperate by jointly deploying and pooling their resources, such as spectrum and infrastructure (e.g., base stations), and agree to serve each other's customers, their aggregate payoffs and individual shares may substantially increase through opportunistic utilization of resources. The potential of such cooperation can, however, be realized only if each provider intelligently determines with whom it would cooperate, when it would cooperate, and how it would deploy and share its resources during such cooperation. Also, developing a rational basis for sharing the aggregate payoffs is imperative for the stability of the coalitions. We model such cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff while offering each a share that removes any incentive to split from the coalition. The optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time by respectively solving the primals and the duals of the above optimizations, using distributed computations and limited exchange of confidential information among the providers. Numerical evaluations reveal that cooperation substantially enhances individual providers' payoffs under the optimal cooperation strategy and several different payoff sharing rules.
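
A minimal sketch, under assumed resource figures and revenues, of the kind of computation described above: the pooled-resource allocation is solved as a linear program, and the dual prices of the pooled constraints yield payoff shares in the standard LP-game construction. This is not the paper's exact formulation; provider names, resource units, and revenues are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Each provider contributes [spectrum units, base-station capacity units].
contributions = {"providerA": np.array([10.0, 4.0]),
                 "providerB": np.array([4.0, 8.0])}
pooled = sum(contributions.values())

# Two customer classes: revenue per unit served and resource usage per unit served.
revenue = np.array([3.0, 2.0])
usage = np.array([[1.0, 0.5],    # spectrum needed per unit of each class
                  [0.5, 1.0]])   # base-station capacity per unit of each class

# Maximize revenue @ x  s.t.  usage @ x <= pooled,  x >= 0  (linprog minimizes).
res = linprog(c=-revenue, A_ub=usage, b_ub=pooled, method="highs")
aggregate_payoff = -res.fun
duals = -res.ineqlin.marginals   # shadow prices of the pooled resource constraints

shares = {p: float(duals @ b) for p, b in contributions.items()}
print("aggregate payoff:", round(aggregate_payoff, 2))
print("dual-based shares:", {p: round(v, 2) for p, v in shares.items()})
print("shares sum to payoff:", bool(np.isclose(sum(shares.values()), aggregate_payoff)))
```

In this toy instance the dual-based shares sum to the aggregate payoff and, by LP duality, no subset of providers could earn more on its own contributed resources, mirroring the stability property described in the abstract.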

Relevance: 10.00%

Abstract:

Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. In addition, however, regenerating codes possess the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case when d = n-1. This code has a particularly simple graphical description and, most interestingly, has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data 'repair by transfer'. The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term 'helper node pooling', and show that it is the necessity to satisfy such scenarios that overconstrains the system.
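
For concreteness, a small sketch of the functional-repair storage-bandwidth tradeoff referred to above (the cut-set bound), evaluated at its minimum-storage and minimum-bandwidth extremes for d = n - 1. The (n, k) values are illustrative, and the formulas are the standard functional-repair ones, not this paper's exact-repair constructions.

```python
from fractions import Fraction as F

def cut_set_bound(alpha, beta, k, d):
    """Maximum file size supported: B <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta)."""
    return sum(min(alpha, (d - i) * beta) for i in range(k))

n, k = 5, 3
d = n - 1            # a replacement node contacts all surviving nodes
B = F(1)             # normalised file size

# Minimum-storage (MSR) extreme: alpha = B/k, beta = B / (k (d - k + 1)).
alpha_msr, beta_msr = B / k, B / (k * (d - k + 1))
# Minimum-bandwidth (MBR) extreme: beta = 2B / (k (2d - k + 1)), alpha = d * beta.
beta_mbr = 2 * B / (k * (2 * d - k + 1))
alpha_mbr = d * beta_mbr

for name, a, b in [("MSR", alpha_msr, beta_msr), ("MBR", alpha_mbr, beta_mbr)]:
    print(f"{name}: storage/node = {a}, repair bandwidth = {d * b}, "
          f"bound gives B = {cut_set_bound(a, b, k, d)}")
```

At the MBR extreme the per-node storage equals the total repair download (alpha = d * beta), which is the minimum-repair-bandwidth point at which the repair-by-transfer construction described above operates.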

Relevance: 10.00%

Abstract:

We investigated within- and between-reader precision in estimating age for northern offshore spotted dolphins and possible effects on precision from the sex and age-class of specimens. Age was estimated from patterns of growth layer groups in the dentine and cementum of the dolphins' teeth. Each specimen was aged at least three times by each of two persons. Two data samples were studied. The first comprised 800 specimens of each sex from animals collected during 1973-78. The second included 45 females collected during 1981. There were significant, generally downward trends through time in the estimates from multiple readings of the 1973-78 data. These trends were slight, and age distributions from last readings and mean estimates per specimen appeared to be homogeneous. The largest factor affecting precision in the 1973-78 data set was between-reader variation. In light of the relatively high within-reader precision (trends considered), the consistent between-reader differences suggest a problem of accuracy rather than precision for this series. Within-reader coefficients of variation averaged approximately 7% and 11%. Pooling the data resulted in an average coefficient of variation near 16%. Within- and between-reader precision were higher for the 1981 sample, and the data were homogeneous over both factors. CVs averaged near 5% and 6% for the two readers. These results point to further refinements in reading the 1981 series. Properties of the 1981 sample may be partly responsible for the greater precision: by chance, proportionately fewer older dolphins were included, and the preparation and selection criteria were probably more stringent.
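
A small illustration, with made-up repeat readings rather than the study's data, of the within-reader versus pooled coefficients of variation reported above.

```python
import numpy as np

# Three simulated age readings per specimen for each of two readers (rows = specimens).
reader1 = np.array([[10, 11, 10], [14, 13, 14], [7, 7, 8]], dtype=float)
reader2 = np.array([[12, 12, 13], [15, 16, 15], [9, 8, 9]], dtype=float)

def mean_cv(readings):
    """Average per-specimen CV: SD of repeated readings divided by their mean."""
    return np.mean(readings.std(axis=1, ddof=1) / readings.mean(axis=1))

print(f"within-reader CV, reader 1: {mean_cv(reader1):.1%}")
print(f"within-reader CV, reader 2: {mean_cv(reader2):.1%}")

# Pooling both readers' readings per specimen adds between-reader variation to the CV.
pooled = np.concatenate([reader1, reader2], axis=1)
print(f"pooled CV: {mean_cv(pooled):.1%}")
```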

Relevance: 10.00%

Abstract:

Age-based analyses were used to demonstrate consistent differences in growth between populations of Acanthochromis polyacanthus (Pomacentridae) collected at three distance strata across the continental shelf (inner, mid-, and outer shelf) of the central Great Barrier Reef (three reefs per distance stratum). Fish had significantly greater maximum lengths with increasing distance from shore, but fish from all distances reached approximately the same maximum age, indicating that growth is more rapid for fish found on outer-shelf reefs. Only one fish collected from inner-shelf reefs reached >100 mm SL, whereas 38-67% of fish collected from the outer shelf were >100 mm SL. The largest age class of adult-size fish collected from inner and mid-shelf locations comprised 3-4 year-olds, but shifted to 2-year-olds on outer-shelf reefs. Mortality schedules (Z and S) were similar irrespective of shelf position (inner shelf: 0.51 and 60.0%; mid-shelf: 0.48 and 61.8%; outer shelf: 0.43 and 65.1%, respectively). Age validation of captive fish indicated that growth increments are deposited annually, between the end of winter and early spring. The observed cross-shelf patterns in adult sizes and growth were unlikely to be a result of genetic differences between sample populations because all fish collected showed the same color pattern. It is likely that cross-shelf variation in the quality and quantity of food, as well as in turbidity, contributes to the observed patterns of growth. Similar patterns of cross-shelf mortality indicate that predation rates varied little across the shelf. Our study cautions against pooling demographic parameters over broad spatial scales without consideration of the potential for cross-shelf variability.
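
As a quick cross-check of the mortality schedules quoted above, the sketch below converts each instantaneous mortality rate Z to annual survivorship assuming the standard S = exp(-Z) relation; the near match to the reported percentages suggests that is the convention used.

```python
import math

# (Z, reported S%) by shelf position, values taken from the abstract.
reported = {"inner shelf": (0.51, 60.0),
            "mid-shelf": (0.48, 61.8),
            "outer shelf": (0.43, 65.1)}

for shelf, (Z, S_reported) in reported.items():
    S_calc = 100 * math.exp(-Z)   # survivorship implied by Z
    print(f"{shelf}: Z = {Z:.2f}, reported S = {S_reported}%, exp(-Z) = {S_calc:.1f}%")
```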

Relevance: 10.00%

Abstract:

Slopes and intercepts of length-weight relationships obtained from 37 populations from the Oti, Pru and Black Volta rivers in Ghana were compared using a one-way analysis of covariance with fixed effects. Although no significant differences were found in this analysis, an ANOVA comparing the magnitudes of mean condition factors (W × 100/SL³) found 9 of the 37 populations significantly different at the 0.05 level. A two-way nested ANOVA using all populations combined, however, did not yield any significant differences between the three rivers. Thus, pooling the data to obtain the results presented in Part I (see Entsua-Mensah et al., Naga 1995) is justified.
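
A minimal sketch, on simulated measurements rather than the Ghana data, of the comparison described above: compute condition factors K = W × 100/SL³ for each population and test for differences with a one-way ANOVA. River and population labels are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def condition_factor(weight_g, sl_cm):
    """Condition factor K = 100 * W / SL^3, with W in g and SL in cm."""
    return 100.0 * weight_g / sl_cm ** 3

# Simulated standard lengths (cm) and weights (g) for three populations.
populations = {}
for name, b in [("Oti-1", 3.0), ("Pru-1", 3.05), ("BlackVolta-1", 2.95)]:
    sl = rng.uniform(8, 20, size=30)
    w = 0.012 * sl ** b * rng.lognormal(0, 0.05, size=30)
    populations[name] = condition_factor(w, sl)

f_stat, p_val = stats.f_oneway(*populations.values())
print({k: round(float(v.mean()), 3) for k, v in populations.items()})
print(f"one-way ANOVA on K: F = {f_stat:.2f}, p = {p_val:.3f}")
```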

Relevance: 10.00%

Abstract:

Stable isotope (SI) values of carbon (δ13C) and nitrogen (δ15N) are useful for determining the trophic connectivity between species within an ecosystem, but interpretation of these data involves important assumptions about sources of intrapopulation variability. We compared intrapopulation variability in δ13C and δ15N for an estuarine omnivore, the Spotted Seatrout (Cynoscion nebulosus), to test these assumptions and assess the utility of SI analysis for delineating the connectivity of this species with other species in estuarine food webs. Both δ13C and δ15N values showed patterns of enrichment in fish caught from coastal to offshore sites and as a function of fish size. Results for δ13C were consistent in liver and muscle tissue, but liver δ15N showed a negative bias relative to muscle that increased with the absolute δ15N value. Natural variability in both isotopes was 5–10 times higher than that observed in laboratory populations, indicating that environmentally driven intrapopulation variability is detectable, particularly after individual bias is removed through sample pooling. These results corroborate the utility of SI analysis for examining the position of Spotted Seatrout in an estuarine food web. On the basis of these results, we conclude that interpretation of SI data in fishes should account for measurable and ecologically relevant intrapopulation variability for each species and system on a case-by-case basis.

Relevance: 10.00%

Abstract:

Do hospitals experience safety tipping points as utilization increases, and if so, what are the implications for hospital operations management? We argue that safety tipping points occur when managerial escalation policies are exhausted and workload variability buffers are depleted. Front-line clinical staff is forced to ration resources and, at the same time, becomes more error prone as a result of elevated stress hormone levels. We confirm the existence of safety tipping points for in-hospital mortality using the discharge records of 82,280 patients across six high-mortality-risk conditions from 256 clinical departments of 83 German hospitals. Focusing on survival during the first seven days following admission, we estimate a mortality tipping point at an occupancy level of 92.5%. Among the 17% of patients in our sample who experienced occupancy above the tipping point during the first seven days of their hospital stay, high occupancy accounted for one in seven deaths. The existence of a safety tipping point has important implications for hospital management. First, flexible capacity expansion is more cost-effective for safety improvement than rigid capacity, because it will only be used when occupancy reaches the tipping point. In the context of our sample, flexible staffing saves more than 40% of the cost of a fully staffed capacity expansion, while achieving the same reduction in mortality. Second, reducing the variability of demand by pooling capacity in hospital clusters can greatly increase safety in a hospital system, because it reduces the likelihood that a patient will experience occupancy levels beyond the tipping point. Pooling the capacity of nearby hospitals in our sample reduces the number of deaths due to high occupancy by 34%.
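
A minimal simulation sketch of the pooling argument above: merging the demand of nearby hospitals reduces relative demand variability, so fewer patient-days fall beyond a high-occupancy tipping point. Bed counts, mean census levels, and the placement of the 92.5% threshold in this toy model are illustrative assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
TIPPING = 0.925
DAYS = 100_000

def frac_days_over_threshold(beds, mean_census):
    """Simulate a daily census (Poisson, for simplicity) and report the share of
    days on which occupancy exceeds the tipping point."""
    census = rng.poisson(mean_census, size=DAYS)
    occupancy = np.minimum(census, beds) / beds
    return np.mean(occupancy > TIPPING)

# Two stand-alone hospitals versus the same total capacity run as one pooled cluster.
solo = frac_days_over_threshold(beds=200, mean_census=170)
pooled = frac_days_over_threshold(beds=400, mean_census=340)
print(f"days above tipping point, stand-alone: {solo:.1%}")
print(f"days above tipping point, pooled     : {pooled:.1%}")
```

With these made-up figures the pooled cluster spends markedly fewer days above the threshold, illustrating the variability-pooling effect; the 34% reduction in deaths reported above comes from the study's own data, not from this simulation.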

Relevance: 10.00%

Abstract:

How does the laminar organization of cortical circuitry in areas V1 and V2 give rise to 3D percepts of stratification, transparency, and neon color spreading in response to 2D pictures and 3D scenes? Psychophysical experiments have shown that such 3D percepts are sensitive to whether contiguous image regions have the same relative contrast polarity (dark-light or light-dark), yet long-range perceptual grouping is known to pool over opposite contrast polarities. The ocularity of contiguous regions is also critical for neon color spreading: having different ocularity, despite a contrast relationship that favors neon spreading, blocks the spread. In addition, half-visible points in a stereogram can induce near-depth transparency if the contrast relationship favors transparency in the half-visible areas. It thus seems critical to have the whole contrast relationship in a monocular configuration, since splitting it between two stereogram images cancels the effect. What adaptive functions of perceptual grouping enable it both to preserve sensitivity to monocular contrast and to pool over opposite contrasts? Aspects of cortical development, grouping, attention, perceptual learning, stereopsis and 3D planar surface perception have previously been analyzed using a 3D LAMINART model of cortical areas V1, V2, and V4. The present work consistently extends this model to show how like-polarity competition between V1 simple cells in layer 4 may be combined with other LAMINART grouping mechanisms, such as cooperative pooling of opposite polarities at layer 2/3 complex cells. The model also explains how the Metelli Rules can lead to transparent percepts, how bistable transparency percepts can arise in which either surface can be perceived as transparent, and how such a transparency reversal can be facilitated by an attention shift. The like-polarity inhibition prediction is consistent with lateral masking experiments in which two flanking Gabor patches with the same contrast polarity as the target increase the target detection threshold when they approach the target. It is also consistent with LAMINART simulations of cortical development. Other model explanations and testable predictions will also be presented.
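
A schematic sketch, not the 3D LAMINART model itself, of the two ingredients the abstract combines: competition within each contrast-polarity channel, followed by complex cells that pool the rectified responses of both polarities. The one-dimensional toy "image" and the divisive normalization used here are illustrative simplifications.

```python
import numpy as np

image = np.array([0, 0, 1, 1, 0, 0], dtype=float)   # a light bar on a dark background
contrast = np.diff(image)                            # local signed contrast

simple_on = np.maximum(contrast, 0)     # dark-to-light polarity channel
simple_off = np.maximum(-contrast, 0)   # light-to-dark polarity channel

def within_channel_competition(channel, sigma=0.1):
    # Divisive normalization among cells of the same polarity, a stand-in for the
    # like-polarity competition described in the abstract.
    return channel / (sigma + channel.sum())

simple_on = within_channel_competition(simple_on)
simple_off = within_channel_competition(simple_off)

# Complex cells pool the two polarities, discarding contrast sign; this is what lets
# long-range grouping link edges of opposite polarity (both sides of the bar).
complex_cells = simple_on + simple_off

print("ON channel :", np.round(simple_on, 2))
print("OFF channel:", np.round(simple_off, 2))
print("complex    :", np.round(complex_cells, 2))
```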

Relevance: 10.00%

Abstract:

How do human observers perceive a coherent pattern of motion from a disparate set of local motion measures? Our research has examined how ambiguous motion signals along straight contours are spatially integrated to obtain a globally coherent perception of motion. Observers viewed displays containing a large number of apertures, with each aperture containing one or more contours whose orientations and velocities could be independently specified. The total pattern of the contour trajectories across the individual apertures was manipulated to produce globally coherent motions, such as rotations, expansions, or translations. For displays containing only straight contours extending to the circumferences of the apertures, observers' reports of global motion direction were biased whenever the sampling of contour orientations was asymmetric relative to the direction of motion. Performance was improved by the presence of identifiable features, such as line ends or crossings, whose trajectories could be tracked over time. The reports of our observers were consistent with a pooling process involving a vector average of measures of the component of velocity normal to contour orientation, rather than with the predictions of the intersection-of-constraints analysis in velocity space.
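
A minimal sketch of the pooling rule the reports were consistent with: for a rigidly translating pattern, the intersection-of-constraints solution recovers the true direction, whereas a vector average of the normal velocity components is biased exactly when contour orientations are sampled asymmetrically about the motion direction. The orientations and test velocity below are illustrative.

```python
import numpy as np

def normal_components(v, orientations_deg):
    """Component of velocity normal to each contour (the only locally measurable part)."""
    normals = np.array([[np.cos(np.deg2rad(o + 90)), np.sin(np.deg2rad(o + 90))]
                        for o in orientations_deg])
    return (normals @ v)[:, None] * normals   # project v onto each contour normal

def vector_average(v, orientations_deg):
    return normal_components(v, orientations_deg).mean(axis=0)

v_true = np.array([1.0, 0.0])        # rightward translation (0 degrees)

symmetric = [-60, -30, 30, 60]       # contour orientations mirrored about the motion axis
asymmetric = [20, 35, 50, 65]        # all contours tilted to one side

for label, oris in [("symmetric sampling", symmetric), ("asymmetric sampling", asymmetric)]:
    va = vector_average(v_true, oris)
    direction = np.degrees(np.arctan2(va[1], va[0]))
    print(f"{label}: vector-average direction = {direction:+.1f} deg (IOC/true = 0.0 deg)")
```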

Relevance: 10.00%

Abstract:

In this review, we discuss recent work by the ENIGMA Consortium (http://enigma.ini.usc.edu) - a global alliance of over 500 scientists spread across 200 institutions in 35 countries collectively analyzing brain imaging, clinical, and genetic data. Initially formed to detect genetic influences on brain measures, ENIGMA has grown to over 30 working groups studying 12 major brain diseases by pooling and comparing brain data. In some of the largest neuroimaging studies to date - of schizophrenia and major depression - ENIGMA has found replicable disease effects on the brain that are consistent worldwide, as well as factors that modulate disease effects. In partnership with other consortia including ADNI, CHARGE, IMAGEN and others(1), ENIGMA's genomic screens - now numbering over 30,000 MRI scans - have revealed at least 8 genetic loci that affect brain volumes. Downstream of gene findings, ENIGMA has revealed how these individual variants - and genetic variants in general - may affect both the brain and risk for a range of diseases. The ENIGMA consortium is discovering factors that consistently affect brain structure and function that will serve as future predictors linking individual brain scans and genomic data. It is generating vast pools of normative data on brain measures - from tens of thousands of people - that may help detect deviations from normal development or aging in specific groups of subjects. We discuss challenges and opportunities in applying these predictors to individual subjects and new cohorts, as well as lessons we have learned in ENIGMA's efforts so far.

Relevance: 10.00%

Abstract:

When viewing two superimposed, translating sets of dots moving in different directions, one overestimates the direction difference. This phenomenon of direction repulsion is thought to be driven by inhibitory interactions between directionally tuned motion detectors [1, 2]. However, there is disagreement on where this occurs: at early stages of motion processing [1, 3], or at the later, global motion-processing stage following "pooling" of these measures [4–6]. These two stages of motion processing have been identified as occurring in area V1 and the human homolog of macaque MT/V5, respectively [7, 8]. We designed experiments in which local and global predictions of repulsion are pitted against one another. Our stimuli contained a target set of dots, moving at a uniform speed, superimposed on a "mixed-speed" distractor set. Because the perceived speed of a mixed-speed stimulus is equal to the dots' average speed [9], a global-processing account of direction repulsion predicts that the repulsion magnitude induced by a mixed-speed distractor will be indistinguishable from that induced by a single-speed distractor moving at the same mean speed. This is exactly what we found. These results provide compelling evidence that global-motion interactions play a major role in driving direction repulsion.

Relevance: 10.00%

Abstract:

The processing of motion information by the visual system can be decomposed into two general stages: point-by-point local motion extraction, followed by global motion extraction through the pooling of the local motion signals. The direction aftereffect (DAE) is a well-known phenomenon in which prior adaptation to a unidirectionally moving pattern results in an exaggerated perceived direction difference between the adapted direction and a subsequently viewed stimulus moving in a different direction. The experiments in this paper sought to identify where the adaptation underlying the DAE occurs within the motion-processing hierarchy. We found that the DAE exhibits interocular transfer, thus demonstrating that the underlying adapted neural mechanisms are binocularly driven and must, therefore, reside in the visual cortex. The remaining experiments measured the speed tuning of the DAE and used the derived function to test a number of local and global models of the phenomenon. Our data provide compelling evidence that the DAE is driven by the adaptation of motion-sensitive neurons at the local-processing stage of motion encoding. This is in contrast to earlier research showing that direction repulsion, which can be viewed as the simultaneous-presentation counterpart of the DAE, is a global motion process. This leads us to conclude that the DAE and direction repulsion reflect interactions between motion-sensitive neural mechanisms at different levels of the motion-processing hierarchy.