190 results for Slope efficiencies
Abstract:
This paper presents preliminary findings on measuring technical efficiency using Data Envelopment Analysis (DEA) in Malaysian Real Estate Investment Trusts (REITs), in order to determine best practice for operations, including asset allocation and scale size, and thereby improve the performance of Malaysian REITs. Variables identified as inputs and outputs are assessed in this cross-sectional analysis using the operational approach and Variable Returns to Scale DEA (VRS-DEA), focusing on Malaysian REITs for the year 2013. Islamic REITs show higher efficiency scores than conventional REITs in both models, and diversified REITs are more efficient than specialised REITs in both models. For Model 1, greater inefficiency is attributed to managerial inefficiency than to scale inefficiency, indicating that inputs are not fully minimised to produce more outputs. However, when other expenses are treated as separate input variables, the efficiency score increases from 60.3% to 81.2%. In Model 2, scale inefficiency contributes more to overall inefficiency than managerial inefficiency. The results suggest that Malaysian REITs have been operating at the wrong scale, as the majority are operating at decreasing returns to scale.
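The input-oriented VRS-DEA score described above can be computed as one small linear program per trust. Below is a minimal sketch (not the authors' implementation; the tiny input/output arrays are illustrative only):

```python
import numpy as np
from scipy.optimize import linprog

def vrs_dea_scores(X, Y):
    """Input-oriented VRS (BCC) DEA: for each unit o, minimise theta
    subject to sum_j lam_j*x_j <= theta*x_o, sum_j lam_j*y_j >= y_o,
    sum_j lam_j = 1, lam >= 0.  X: (n, inputs), Y: (n, outputs)."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam]
        A_in = np.hstack([-X[o:o + 1].T, X.T])         # inputs <= theta * x_o
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs >= y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # convexity: VRS frontier
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.fun)
    return np.array(scores)

# three hypothetical REITs, one input (expenses) and one output (income)
scores = vrs_dea_scores(np.array([[2.0], [4.0], [6.0]]),
                        np.array([[1.0], [2.0], [2.0]]))
```

A score of 1 marks a unit on the efficient frontier; a score below 1 shows how far inputs could be shrunk while keeping outputs.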
Abstract:
Agility is an essential part of many athletic activities. Currently, drill duration is the sole criterion used to evaluate agility performance; the relationship between drill duration and factors such as acceleration, deceleration and change of direction, however, has not been fully explored. This paper provides a mathematical description of the relationship between velocity and radius of curvature in an agility drill through implementation of a power law (PL). Two groups of skilled and unskilled participants performed a cyclic forward/backward shuttle agility test. Kinematic data were recorded using a motion capture system at a sampling rate of 200 Hz. The logarithmic relationship between tangential velocity and radius of curvature of participant trajectories in both groups was established using the PL. The slopes of the regression lines were 0.26 and 0.36 for the skilled and unskilled groups, respectively; both magnitudes are approximately 0.3, close to the expected 1/3 value. The results indicate how the PL could be implemented in an agility drill, opening the way for a more representative measure of agility performance than drill duration.
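The PL fit described above reduces to an ordinary least-squares line in log-log space: log v = β log r + c, with β expected near 1/3. A minimal sketch on synthetic data (the constants are illustrative, not values from the study):

```python
import numpy as np

def power_law_slope(v, r):
    """Fit log(v) = beta * log(r) + c and return beta (the PL exponent)."""
    beta, _ = np.polyfit(np.log(r), np.log(v), 1)
    return beta

rng = np.random.default_rng(0)
r = rng.uniform(0.2, 2.0, 500)                   # radii of curvature (m)
v = 1.5 * r ** (1 / 3) * np.exp(rng.normal(0.0, 0.05, 500))  # tangential speed
beta = power_law_slope(v, r)                     # recovers a slope near 1/3
```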
Abstract:
With new national targets for patient flow in public hospitals designed to increase efficiencies in patient care and resource use, better knowledge of events affecting length of stay will support improved bed management and scheduling of procedures. This paper presents a case study involving the integration of material from each of three databases in operation at one tertiary hospital, and demonstrates that it is possible to follow patient journeys from admission to discharge. What is known about this topic? At present, patient data at one Queensland tertiary hospital are assembled in three information systems: (1) the Hospital Based Corporate Information System (HBCIS), which tracks patients from in-patient admission to discharge; (2) the Emergency Department Information System (EDIS), containing patient data from presentation to departure from the emergency department; and (3) the Operating Room Management Information System (ORMIS), which records surgical operations. What does this paper add? This paper describes how a new enquiry tool may be used to link the three hospital information systems for studying the hospital journey through different wards and/or operating theatres, for both individual patients and groups of patients. What are the implications for practitioners? An understanding of patients' journeys provides better insight into patient flow, and provides a tool for research relating to access block as well as for optimising the use of physical and human resources.
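The paper's enquiry tool is not described in code, but the basic linkage of HBCIS, EDIS and ORMIS records amounts to joining extracts on a shared patient identifier. A hypothetical sketch (all column names and records are invented for illustration):

```python
import pandas as pd

# toy extracts from the three systems, keyed on a shared patient identifier
hbcis = pd.DataFrame({"patient_id": [1, 2],
                      "admitted": ["2013-01-02", "2013-01-05"],
                      "discharged": ["2013-01-09", "2013-01-06"]})
edis = pd.DataFrame({"patient_id": [1], "ed_presented": ["2013-01-01"]})
ormis = pd.DataFrame({"patient_id": [1, 1],
                      "procedure": ["proc_A", "proc_B"]})

# left joins keep every admission, attaching ED and theatre events if present
journey = (hbcis.merge(edis, on="patient_id", how="left")
                .merge(ormis, on="patient_id", how="left"))
```

Each admission row fans out to one row per theatre event, while patients with no ED or theatre record keep empty fields.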
Abstract:
This work explored the applicability of electrocoagulation (EC) using aluminium electrodes for the removal, from a coal seam (CS) water sample predominantly comprising sodium chloride and sodium bicarbonate, of contaminants that can scale and foul reverse osmosis membranes. In general, the removal efficiency of species responsible for scaling and fouling was enhanced by increasing the applied current density/voltage and contact times (30-60 s) in the EC chamber. High removal efficiencies were achieved for species potentially responsible for scale formation in reverse osmosis units, such as calcium (100%), magnesium (87.9%), strontium (99.3%), barium (100%) and silicates (98.3%). Boron was more difficult to eliminate (13.3%), which was postulated to be due to the elevated solution pH. Similarly, fluoride removal from solution (44%) was also inhibited by the presence of hydroxide ions in the pH range 9-10. Analysis of the produced flocs suggested the dominant presence of relatively amorphous boehmite (AlOOH), although the formation of Al(OH)3 was not ruled out, as the drying process employed may have converted aluminium hydroxide to aluminium oxyhydroxide species. Evidence for adsorption of contaminants on floc surface sites was obtained from FTIR studies. The quantity of aluminium released during electrocoagulation was higher than the Faradaic amount, which suggested that the high salt concentrations in the coal seam water had chemically reacted with the aluminium electrodes.
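The Faradaic benchmark mentioned above follows from Faraday's law of electrolysis: the theoretical anode dissolution is m = I·t·M/(z·F), with z = 3 electrons per Al3+ ion. A quick sketch (the current and time values are illustrative):

```python
M_AL = 26.98    # molar mass of aluminium, g/mol
F = 96485.0     # Faraday constant, C/mol
Z = 3           # electrons transferred per Al3+ ion

def faradaic_al_mass(current_a, time_s):
    """Theoretical mass (g) of aluminium dissolved from the anode."""
    return current_a * time_s * M_AL / (Z * F)

m = faradaic_al_mass(2.0, 60.0)   # 2 A applied for 60 s
```

Measured aluminium in excess of this value, as reported here, points to additional chemical (non-electrochemical) attack on the electrodes.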
Abstract:
Change point estimation is recognised as an essential tool of root cause analysis within quality control programs, as it enables clinical experts to search more effectively for potential causes of change in hospital outcomes. In this paper, we consider estimation of the time at which a linear trend disturbance occurred in survival time following an in-control clinical intervention, in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since monitoring is conducted over a limited follow-up period. We capture the effect of pre-surgery risk factors using a Weibull accelerated failure time regression model, and use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and slope of the trend, together with corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulation; the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator gives more accurate and precise estimates over linear trends. These superiorities are enhanced when the probability quantification, flexibility and generalisability of the Bayesian change point detection model are also considered.
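As a much-simplified illustration of the estimation target (not the paper's Bayesian, risk-adjusted, censored model), the onset of a linear trend in an otherwise stable series can be located by a least-squares grid search over candidate change points:

```python
import numpy as np

def estimate_trend_changepoint(y):
    """Least-squares grid search for the onset tau of a linear drift:
    y_t = mu + slope * max(0, t - tau) + noise."""
    t = np.arange(len(y))
    best_sse, best_tau = np.inf, None
    for tau in range(1, len(y) - 2):
        # design matrix: intercept plus a ramp starting at candidate tau
        X = np.column_stack([np.ones(len(y)), np.maximum(0, t - tau)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_sse, best_tau = sse, tau
    return best_tau

rng = np.random.default_rng(1)
t = np.arange(200)
y = 10 + 0.1 * np.maximum(0, t - 120) + rng.normal(0, 0.4, 200)
tau_hat = estimate_trend_changepoint(y)   # should land near the true tau = 120
```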
Abstract:
We report highly efficient photocatalysts comprising supported nanoparticles (NPs) of gold (Au) and palladium (Pd) alloys, which use visible light to catalyse Suzuki cross-coupling reactions at ambient temperature. The alloy NPs strongly absorb visible light, energising their conduction electrons and producing highly energetic electrons at surface sites. The surface of the energised NPs activates the substrates, and the particles exhibit good activity across a range of typical Suzuki reaction combinations. The photocatalytic efficiency depends strongly on the Au:Pd ratio of the alloy NPs and on the irradiation intensity and wavelength. The results show that the alloy nanoparticles efficiently couple thermal and photonic energy sources to drive Suzuki reactions. Density functional theory (DFT) calculations indicate that transfer of light-excited electrons from the nanoparticle surface to reactant molecules adsorbed on that surface activates the reactants. The knowledge acquired in this study may inspire further studies of new efficient photocatalysts and a wide range of sunlight-driven organic syntheses.
Abstract:
A set of packed micro paddy lysimeters placed in a greenhouse was used to simulate the dissipation of two herbicides, simetryn and thiobencarb, in a controlled environment. Data from a 2003 field monitoring study, including soil conditions and water balances, were used in the simulation. The herbicides were applied and monitored over a period of 21 d. The water balances under two water management scenarios, intermittent irrigation management (AI) and continuous irrigation management (CI), were simulated. In the AI scenario, the pattern of herbicide dissipation in the surface water of the field was simulated, following first-order kinetics. In the CI scenario, most lysimeter concentrations were similar to field concentrations, although there were differences at some data points. Dissipation curves of both herbicides in the surface water of the two simulated scenarios were not significantly different (P > 0.05) from the field data, except for the intercept of the thiobencarb curve in the CI scenario. The distribution of simetryn and thiobencarb in the soil profile after simulation was also similar to the field data. The highest concentrations of both herbicides were found in the topsoil layer at 0-2.5 cm depth; only a small amount of herbicide moved down to the deeper soil layers. Micro paddy lysimeters are thus a good alternative for studying the dissipation of pesticides in the paddy environment.
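The first-order kinetics invoked above imply C(t) = C0·exp(-kt), so a dissipation rate constant k maps directly to a half-life of ln(2)/k. A small sketch (the C0 and k values are illustrative, not the study's fitted values):

```python
import numpy as np

def first_order_conc(c0, k, t):
    """First-order dissipation: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

def half_life(k):
    """Time for the concentration to halve under first-order decay."""
    return np.log(2.0) / k

c_day7 = first_order_conc(100.0, 0.2, 7.0)   # e.g. 100 ug/L, k = 0.2 per day
t_half = half_life(0.2)
```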
Abstract:
With the introduction of the PCEHR (Personally Controlled Electronic Health Record), the Australian public is being asked to accept greater responsibility for the management of their health information. However, the implementation of the PCEHR has seen poor adoption rates, underscored by criticism from stakeholders with concerns about transparency, accountability, privacy, confidentiality, governance and limited capabilities. This study adopts an ethnographic lens to observe how information is created and used during the patient journey, and the social factors affecting adoption of the PCEHR at the micro level, in order to develop a conceptual model that will encourage the sharing of patient information within the cycle of care. Objective: This study aims, first, to establish a basic understanding of healthcare professionals' attitudes toward a national platform for sharing patient summary information in the form of a PCEHR, and second, to map the flow of patient-related information as it traverses a patient's personal cycle of care. An ethnographic approach was used to bring a "real world" lens to information flow in a series of case studies in the Australian healthcare system, to discover themes and issues that are important from the patient's perspective. Design: Qualitative study utilising ethnographic case studies. Setting: Case studies were conducted with primary and allied healthcare professionals located in Brisbane, Queensland, between October 2013 and July 2014. Results: In the first dimension, healthcare professionals' concerns about trust and medico-legal issues related to patient control and information quality, together with the lack of clinical value offered by the PCEHR, emerged as significant barriers to use. The second dimension of the study, which mapped patient information flow, identified information quality issues, clinical workflow inefficiencies and interoperability misconceptions, resulting in duplication of effort, unnecessary manual processes, data quality and integrity issues, and an over-reliance on the understanding and communication skills of the patient. Conclusion: Opportunities for process efficiencies, improved data quality and increased patient safety emerge with the adoption of an appropriate information-sharing platform. More importantly, large-scale eHealth initiatives must be aligned with the value proposition of individual stakeholders in order to achieve widespread adoption. Leveraging the Australian national eHealth infrastructure and the PCEHR, we offer a practical example of a service-driven digital ecosystem suitable for co-creating value in healthcare.
Abstract:
Sampling strategies based on ranked set sampling (RSS) are developed to increase efficiency and therefore reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or fishery catch rates. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF), and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed under different optimality criteria. For the NPF monitoring survey, the efficiency in terms of the variance or mean squared error of the estimated mean abundance index ranged from 114% to 199% relative to SRS. In the case of a fish ageing study for Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
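The efficiency gain of RSS over SRS is easy to see in simulation: keeping the i-th judged order statistic from the i-th set of m units yields a mean estimator with smaller variance than an SRS mean of the same total size. A minimal sketch assuming perfect ranking (real designs rank via a concomitant variable such as prior-survey catch rates):

```python
import numpy as np

rng = np.random.default_rng(42)

def rss_mean(m, cycles):
    """One ranked-set-sample mean: per cycle, draw m sets of m units and
    keep the i-th order statistic from the i-th set (perfect ranking)."""
    kept = []
    for _ in range(cycles):
        for i in range(m):
            kept.append(np.sort(rng.normal(0.0, 1.0, m))[i])
    return np.mean(kept)

def srs_mean(n):
    return np.mean(rng.normal(0.0, 1.0, n))

m, cycles, reps = 4, 5, 2000
var_rss = np.var([rss_mean(m, cycles) for _ in range(reps)])
var_srs = np.var([srs_mean(m * cycles) for _ in range(reps)])
rel_eff = var_srs / var_rss   # roughly 2 for normal data with m = 4
```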
Abstract:
Adaptations of weighted rank regression to the accelerated failure time model for censored survival data have been successful in yielding asymptotically normal estimates and flexible weighting schemes that increase statistical efficiency. However, only for one simple weighting scheme, Gehan or Wilcoxon weights, are the estimating equations guaranteed to be monotone in the parameter components, and even in this case they are step functions, requiring the equivalent of linear programming for computation. The lack of smoothness makes standard error or covariance matrix estimation even more difficult. An induced smoothing technique has overcome these difficulties in various problems involving monotone but pure-jump estimating equations, including conventional rank regression. The present paper applies induced smoothing to Gehan-Wilcoxon weighted rank regression for the accelerated failure time model, in the more difficult case of survival times subject to censoring, where the inapplicability of permutation arguments necessitates a new method of estimating the null variance of the estimating functions. Smooth monotone parameter estimation and rapid, reliable standard error or covariance matrix estimation are obtained.
Abstract:
Troxel, Lipsitz, and Brennan (1997, Biometrics 53, 857-869) considered parameter estimation from survey data with nonignorable nonresponse and proposed weighted estimating equations to remove the biases of the complete-case analysis that ignores missing observations. This paper suggests two alternative modifications for unbiased estimation of regression parameters when a binary outcome is potentially observed at successive time points. The weighting approach of Robins, Rotnitzky, and Zhao (1995, Journal of the American Statistical Association 90, 106-121) is also modified to obtain unbiased estimating functions. The suggested estimating functions are unbiased only when the missingness probability is correctly specified; misspecification of the missingness model will bias the estimates. Simulation studies are carried out to assess the performance of the different methods when the covariate is binary or normal. For the simulation models used, the relative efficiency of the two new methods relative to the weighting methods is about 3.0 for the slope parameter and about 2.0 for the intercept parameter when the covariate is continuous and the missingness probability is correctly specified. All methods produce substantial biases in the estimates when the missingness model is misspecified or underspecified. Analysis of data from a medical survey illustrates the use, and possible differences, of these estimating functions.
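The core idea of the weighting approach, reweighting observed cases by the inverse of their response probability so that the complete cases stand in for the full sample, can be sketched for a simple mean (a generic illustration, not the paper's longitudinal estimating functions; the data-generating model is invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.normal(0.0, 1.0, n)
# binary outcome with P(y = 1 | x) = expit(0.5 + x)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + x)))).astype(float)

# missingness depends on x through a (correctly specified) logistic model
p_obs = 1 / (1 + np.exp(-(0.2 + 1.5 * x)))
obs = rng.uniform(size=n) < p_obs

cc_mean = y[obs].mean()                        # complete-case: biased
w = 1.0 / p_obs[obs]                           # inverse-probability weights
ipw_mean = np.sum(w * y[obs]) / np.sum(w)      # weighted: bias removed
```

If p_obs were misspecified, the weighted estimator would be biased too, mirroring the sensitivity reported above.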
Abstract:
The efficiency with which a small beam trawl (1 × 0.5 m mouth) sampled postlarvae and juveniles of the tiger prawns Penaeus esculentus and P. semisulcatus at night was estimated in three tropical seagrass communities (dominated by Thalassia hemprichii, Syringodium isoetifolium and Enhalus acoroides, respectively) in the shallow waters of the Gulf of Carpentaria in northern Australia. An area of seagrass (40 × 3 m) was enclosed by a net and the beam trawl was repeatedly hand-hauled over the substrate. Net efficiency (q) was calculated using four methods: the unweighted Leslie, weighted Leslie, DeLury and maximum-likelihood (ML) methods. The maximum-likelihood method is preferred for estimating efficiency because it makes the fewest assumptions and is not affected by zero catches. The major difference in net efficiencies was between postlarvae (mean ML q ± 95% confidence limits = 0.66 ± 0.16) and juveniles of both species (mean q for juveniles in water ≤ 1.0 m deep = 0.47 ± 0.05); i.e. the beam trawl was more efficient at capturing postlarvae than juveniles. There was little difference in net efficiency for P. esculentus between seagrass types (T. hemprichii versus S. isoetifolium), even though the biomass and morphologies of seagrass in these communities differed greatly (biomasses were 54 and 204 g m-2, respectively). The efficiency of the net appeared to be the same for juveniles of the two species in shallow water, but was lower for juvenile P. semisulcatus at high tide when the water was deeper (1.6 to 1.9 m) (0.35 ± 0.08). The lower efficiency near the time of high tide is possibly because the prawns are more active at high tide than at low tide, and can also escape above the net. Factors affecting net efficiency and alternative methods of estimating it are discussed.
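The unweighted Leslie estimate mentioned above comes from regressing catch per haul on the cumulative catch removed before that haul: with constant efficiency q and enclosed abundance N, the expected catch is q·N − q·K, so q = −slope and N = intercept/q. A minimal sketch on an idealised catch series:

```python
import numpy as np

def leslie_estimate(catches):
    """Unweighted Leslie depletion estimates of net efficiency q and
    initial abundance N from successive removal catches."""
    catches = np.asarray(catches, dtype=float)
    K = np.concatenate([[0.0], np.cumsum(catches)[:-1]])  # prior removals
    slope, intercept = np.polyfit(K, catches, 1)
    q = -slope
    return q, intercept / q

# noiseless depletion series for N = 100 animals caught with q = 0.5 per haul
q, N = leslie_estimate([50.0, 25.0, 12.5, 6.25])
```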
Abstract:
In open-cut strip mining, waste material is placed in-pit to minimise operational mine costs. Slope failures in these spoil piles pose a significant safety risk to personnel, along with a financial risk from loss of equipment and scheduling delays. It has been observed that most spoil pile failures occur when the pit has previously been filled with water and then subsequently dewatered. The failures are often initiated at the base of spoil piles, where the material can undergo significant slaking (disintegration) over time due to overburden pressure and water saturation. It is important to understand how the mechanical properties of base spoil material are affected by slaking when designing safe spoil pile slope angles, heights and dewatering rates. In this study, fresh spoil material collected from a coal mine in the Bowen Basin coalfield of Queensland, Australia was subjected to high overburden pressure (0-900 kPa) under saturated conditions, maintained over a period of time (0-6 months) to allow the material to slake. Laboratory-designed pressure chambers were used to create these conditions. Once a spoil sample had slaked under a given overburden pressure for a given period, it was tested for classification, permeability and strength properties. The results of this testing program suggested that the slaking of saturated coal mine spoil increases with overburden pressure and with the duration over which the pressure is maintained. Further, the shear strength and permeability of the spoil decreased with increasing slaking.
Abstract:
This research examined the influence of tectonic activity on submarine sedimentation processes through a deposit-based analysis of turbidites in outcrop. A comprehensive field study of the Miocene Whakataki Formation yielded significant data that were analysed using methods of process sedimentology, stratigraphy and ichnology. Signatures of the tectonically active depositional environment were identifiable at very high resolution, from grain composition and texture to trace-fossil assemblages, as well as at broader scale in stratigraphic stacking patterns and structural deformation. From these results and environmental interpretations, an original facies characterisation and a conceptual depositional model have been established.
Abstract:
This report summarises the findings of a case study on Queensland's New Generation Rollingstock (NGR) Project, carried out as part of SBEnrc Project 2.34 Driving Whole-of-life Efficiencies through BIM and Procurement. This case study is one of three exemplar projects studied in order to leverage academic research in defining indicators for measuring tangible and intangible benefits of Building Information Modelling (BIM) across a project's life-cycle in infrastructure and buildings. The NGR is an AUD 4.4 billion project carried out under an Availability Payment Public-Private Partnership (PPP) between the Queensland Government and the Bombardier-led QTECTIC consortium, comprising Bombardier Transportation, John Laing, ITOCHU Corporation and Aberdeen Infrastructure Investments. BIM has been deployed on the project from the conceptual stages to drive both design and the currently ongoing construction at the Wulkuraka Project Site. This case study sourced information from a series of semi-structured interviews covering a cross-section of key stakeholders on the project. The research identified 25 benefits gained from implementing BIM processes and tools. Some of the most prominent benefits were those leading to improved outcomes and higher customer satisfaction, such as improved communications, data and information management, and coordination. A number of benefits were also expected for future phases:
• Improved decision making through the use of BIM for managing assets
• Improved models through BIM maturity
• Better utilisation of BIM for procurement on similar future projects
• New capacity to specify the content of BIM models within contracts
Three benefits that were expected to have been achieved were not realised on the NGR project: higher construction information quality levels, better alignment in design teams as well as project teams, and capability improvements in measuring the impact of BIM on construction safety. This report includes individual profiles describing each benefit, together with the tools and processes that enabled them. Four key BIM metrics were found to be currently in use, and six more were identified as potential metrics for the future. This case study also provides insights into challenges associated with implementing BIM on a project of the size and complexity of the NGR. Procurement aspects and lessons learned for managers are also highlighted, including a list of recommendations for developing a framework to assess the benefits of BIM across the project life-cycle.