894 results for Scaling Criteria
Abstract:
Context: The Ober and Thomas tests are subjective and involve a "negative" or "positive" assessment, making them difficult to apply within the paradigm of evidence-based medicine. No authors have combined the subjective clinical assessment with an objective measurement for these special tests. Objective: To compare the subjective assessment of iliotibial band and iliopsoas flexibility with the objective measurement of a digital inclinometer, to establish normative values, and to provide an evidence-based critical criterion for determining tissue tightness. Design: Cross-sectional study. Setting: Clinical research laboratory. Patients or Other Participants: Three hundred recreational athletes (125 men, 175 women; 250 in the injured group, 50 in the control group). Main Outcome Measure(s): Iliotibial band and iliopsoas muscle flexibility were determined subjectively using the modified Ober and Thomas tests, respectively. Using a digital inclinometer, we objectively measured limb position. Interrater reliability for the subjective assessment was compared between 2 clinicians for a random sample of 100 injured participants, who were classified subjectively as either negative or positive for iliotibial band and iliopsoas tightness. Percentage of agreement indicated interrater reliability for the subjective assessment. Results: For iliotibial band flexibility, the average inclinometer angle was -24.59° ± 7.27°. A total of 432 limbs were subjectively assessed as negative (-27.13° ± 5.53°) and 168 as positive (-16.29° ± 6.87°). For iliopsoas flexibility, the average inclinometer angle was -10.60° ± 9.61°. A total of 392 limbs were subjectively assessed as negative (-15.51° ± 5.82°) and 208 as positive (0.34° ± 7.00°). The critical criteria for iliotibial band and iliopsoas flexibility were determined to be -23.16° and -9.69°, respectively. Between-clinician agreement was very good: 95.0% for the Thomas test and 97.6% for the Ober test. Conclusions: Subjective assessments and instrumented measurements were combined to establish normative values and critical criteria for tissue flexibility for the modified Ober and Thomas tests.
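Applying the published cutoffs is straightforward; below is a minimal sketch, assuming an angle less negative than the critical criterion indicates a positive (tight) assessment, consistent with the group means above. The function name and interface are illustrative, not from the paper.

CRITICAL_CRITERIA = {"iliotibial_band": -23.16, "iliopsoas": -9.69}  # degrees, from the Results above

def is_tight(structure, inclinometer_angle_deg):
    """Return True (positive/tight) when the angle is above the cutoff."""
    return inclinometer_angle_deg > CRITICAL_CRITERIA[structure]

is_tight("iliotibial_band", -16.3)  # True: typical of the positive group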
Abstract:
Particulate matter research is essential because of the well-known significant adverse effects of aerosol particles on human health and the environment. In particular, identification of the origin or sources of particulate matter emissions is of paramount importance in assisting efforts to control and reduce air pollution in the atmosphere. This thesis aims to: identify the sources of particulate matter; compare pollution conditions at urban, rural and roadside receptor sites; combine information about the sources with meteorological conditions at the sites to locate the emission sources; compare sources based on particle size or mass; and ultimately, provide the basis for control and reduction of particulate matter concentrations in the atmosphere. To achieve these objectives, data were obtained from assorted local and international receptor sites over long sampling periods. The samples were analysed using Ion Beam Analysis and Scanning Mobility Particle Sizer methods to measure the particle mass with chemical composition and the particle size distribution, respectively. Advanced data analysis techniques were employed to derive information from the large, complex data sets. Multi-Criteria Decision Making (MCDM), a ranking method, drew on data variability to examine the overall trends and provided the rank ordering of the sites and years in which sampling was conducted. Coupled with the receptor model Positive Matrix Factorisation (PMF), the pollution emission sources were identified and meaningful information pertinent to the prioritisation of control and reduction strategies was obtained. This thesis is presented in the thesis-by-publication format. It includes four refereed papers which together demonstrate a novel combination of data analysis techniques that enabled particulate matter sources to be identified and sampling sites/years to be ranked. The strength of this source identification process was corroborated when the analysis procedure was expanded to encompass multiple receptor sites. Initially applied to identify the contributing sources at roadside and suburban sites in Brisbane, the technique was subsequently applied to three receptor sites (roadside, urban and rural) located in Hong Kong. The comparable results from these international and national sites over several sampling periods indicated similarities in source contributions between receptor site types, irrespective of global location, and suggested the need to apply these methods to air pollution investigations worldwide. Furthermore, an investigation into particle size distribution data was conducted to deduce the sources of aerosol emissions based on particle size and elemental composition. Considering the adverse effects on human health caused by small-sized particles, knowledge of particle size distributions and their elemental composition provides a different perspective on the pollution problem. This thesis clearly illustrates that the application of an innovative combination of advanced data interpretation methods to identify particulate matter sources and rank sampling sites/years provides the basis for the prioritisation of future air pollution control measures. Moreover, this study contributes significantly to knowledge of the chemical composition of airborne particulate matter in Brisbane, Australia, and of the identity and plausible locations of the contributing sources. Such novel source apportionment and ranking procedures are ultimately applicable to environmental investigations worldwide.
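The factorisation step behind PMF can be sketched with scikit-learn's non-negative matrix factorisation as a stand-in (PMF additionally weights by measurement uncertainty, which plain NMF does not). This is a minimal sketch; the data shape and number of sources are illustrative assumptions.

import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(200, 15)           # samples x elemental concentrations
model = NMF(n_components=5, init="nndsvd", max_iter=500)
G = model.fit_transform(X)            # source contributions per sample
F = model.components_                 # source profiles (elemental signatures)
# Each row of F is interpreted against known signatures (e.g. soil, sea salt,
# vehicle exhaust) to identify the emission source it represents.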
Abstract:
To date, the formation of deposits on heat exchanger surfaces remains the least understood problem in the design of heat exchangers for processing industries. Dr East has related the structure of the deposits to solution composition and has developed predictive models for composite fouling of calcium oxalate and silica in sugar factory evaporators.
Abstract:
The overall aim of our research was to characterize airborne particles from selected nanotechnology processes and to utilize the data to develop and test quantitative particle concentration-based criteria that can be used to trigger an assessment of particle emission controls. We investigated particle number concentration (PNC), particle mass (PM) concentration, count median diameter (CMD), alveolar deposited surface area, elemental composition, and morphology from sampling of aerosols arising from six nanotechnology processes. These included fibrous and non-fibrous particles, including carbon nanotubes (CNTs). We adopted standard occupational hygiene principles in relation to controlling peak emissions and exposures, as outlined by both Safe Work Australia (1) and the American Conference of Governmental Industrial Hygienists (ACGIH®) (2). Analysis of peak (highest value recorded) and 30-minute averaged particle number and mass concentration values measured during the operation of the nanotechnology processes revealed: peak PNC20–1000 nm emitted from the nanotechnology processes was up to three orders of magnitude greater than the local background particle concentration (LBPC); peak PNC300–3000 nm was up to an order of magnitude greater; and PM2.5 concentrations were up to four orders of magnitude greater. For three of these nanotechnology processes, the 30-minute averaged particle number and mass concentrations were also significantly different from the LBPC (p-value < 0.001). We propose that emission or exposure controls may need to be implemented or modified, or further assessment of the controls be undertaken, if concentrations exceed three times the LBPC, which is also used as the local particle reference value, for more than a total of 30 minutes during a workday, and/or if a single short-term measurement exceeds five times the local particle reference value. The use of these quantitative criteria, which we are terming the universal excursion guidance criteria, will account for the typical variation in LBPC and the inaccuracy of instruments, while remaining precautionary enough to highlight peaks in particle concentration likely to be associated with particle emission from the nanotechnology process. Recommendations on when to utilize local excursion guidance criteria are also provided.
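The proposed criteria reduce to two checks over a workday's readings. Below is a minimal sketch, assuming concentration readings logged at a fixed interval; the function name and data layout are illustrative, not from the study, while the 3x/30-minute and 5x thresholds come from the abstract.

def exceeds_excursion_criteria(readings, reference_value, interval_minutes=1):
    """readings: particle concentration values logged over one workday."""
    # Criterion 1: more than 30 minutes in total above 3x the reference value
    minutes_above = sum(interval_minutes for c in readings if c > 3 * reference_value)
    sustained = minutes_above > 30
    # Criterion 2: any single short-term measurement above 5x the reference value
    short_term = any(c > 5 * reference_value for c in readings)
    return sustained or short_term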
Abstract:
We review and discuss recent developments in best–worst scaling (BWS) that allow researchers to measure items or objects on measurement scales with known properties. We note that BWS has some distinct advantages compared with other measurement approaches, such as category rating scales or paired comparisons. We demonstrate how to use BWS to measure subjective quantities in two different empirical examples. One measures preferences for weekend getaways and requires comparing relatively few objects; the second measures academics' perceptions of the quality of academic marketing journals and requires comparing a substantially larger set of objects. We conclude by discussing some limitations and future research opportunities related to BWS.
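The simplest BWS measure is the best-minus-worst count per item. Below is a minimal sketch, assuming each choice task records one "best" and one "worst" pick; the data layout and example items are illustrative.

from collections import Counter

def best_minus_worst_scores(tasks):
    """tasks: iterable of (best_item, worst_item) pairs pooled across respondents."""
    tasks = list(tasks)
    best = Counter(b for b, _ in tasks)
    worst = Counter(w for _, w in tasks)
    # Items picked as best more often than worst receive higher scores.
    return {item: best[item] - worst[item] for item in set(best) | set(worst)}

best_minus_worst_scores([("beach", "city"), ("beach", "hinterland"), ("city", "hinterland")])
# {'beach': 2, 'city': 0, 'hinterland': -2}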
Abstract:
Increasing global competition, rapid technological changes, advances in manufacturing and information technology and discerning customers are forcing supply chains to adopt improvement practices that enable them to deliver high-quality products at a lower cost and in a shorter period of time. A lean initiative is one of the most effective approaches toward achieving this goal. In the lean improvement process, it is critical to measure current and desired performance levels in order to clearly evaluate lean implementation efforts. Many attempts have been made to measure supply chain performance incorporating both quantitative and qualitative measures, but they have failed to provide an effective method of measuring performance improvements in dynamic lean supply chain situations. Therefore, appropriate measurement of lean supply chain performance has become imperative. There are many lean tools available for supply chains; however, the effectiveness of a lean tool depends on the type of product and supply chain. One tool may be highly effective for a supply chain involved in high-volume products but may not be effective for low-volume products. There is currently no systematic methodology available for selecting appropriate lean strategies based on the type of supply chain and market strategy. This thesis develops an effective method to measure supply chain performance using both quantitative and qualitative metrics and investigates the effects of product types and lean tool selection on supply chain performance. Supply chain performance metrics and the effects of various lean tools on the performance metrics mentioned in the SCOR framework have been investigated. A lean supply chain model based on the SCOR metric framework is then developed in which non-lean and lean, as well as quantitative and qualitative, metrics are incorporated. The values of the metrics are converted into triangular fuzzy numbers using similarity rules and heuristic methods. Data have been collected from an apparel manufacturing company for multiple supply chain products, and a fuzzy-based method is applied to measure the performance improvements in the supply chains. Using the fuzzy TOPSIS method, which chooses an optimum alternative to maximise similarity with the positive ideal solution and minimise similarity with the negative ideal solution, the performances of lean and non-lean supply chain situations for three different apparel products have been evaluated. To address the research questions related to an effective performance evaluation method and the effects of lean tools on different types of supply chains, a conceptual framework and two hypotheses are investigated. Empirical results show that the implementation of lean tools has significant effects on performance improvements in terms of time, quality and flexibility. The fuzzy TOPSIS-based method developed is able to integrate multiple supply chain metrics into a single performance measure, while the lean supply chain model incorporates qualitative and quantitative metrics. It can therefore effectively measure the improvements in a supply chain after implementing lean tools. It is demonstrated that the product types involved in the supply chain and the ability to select the right lean tools have significant effects on lean supply chain performance. Future research could conduct multiple case studies in different contexts.
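The core fuzzy TOPSIS step can be sketched briefly. This is a minimal sketch using the standard vertex-method distance, assuming each criterion value is a triangular fuzzy number (l, m, u); the interface and example data are illustrative, not the thesis's apparel data.

import math

def tfn_distance(a, b):
    # Vertex-method distance between two triangular fuzzy numbers (l, m, u)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3)

def closeness_coefficients(alternatives, ideal_pos, ideal_neg):
    """alternatives: {name: [tfn per criterion]}; ideals: [tfn per criterion]."""
    scores = {}
    for name, tfns in alternatives.items():
        d_pos = sum(tfn_distance(t, p) for t, p in zip(tfns, ideal_pos))
        d_neg = sum(tfn_distance(t, n) for t, n in zip(tfns, ideal_neg))
        scores[name] = d_neg / (d_pos + d_neg)  # nearer 1 = closer to the positive ideal
    return scores

closeness_coefficients(
    {"lean": [(0.6, 0.8, 1.0)], "non_lean": [(0.2, 0.4, 0.6)]},
    ideal_pos=[(1.0, 1.0, 1.0)], ideal_neg=[(0.0, 0.0, 0.0)],
)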
Abstract:
A crucial task in contractor prequalification is to establish a set of decision criteria through which the capabilities of contractors are measured and judged. However, in the UK, there are no nationwide standards or guidelines governing the selection of decision criteria for contractor prequalification. The decision criteria are usually established by individual clients on an ad hoc basis. This paper investigates the divergence of decision criteria used by different client and consultant organisations in contractor prequalification through a large empirical survey conducted in the UK. The results indicate that there are significant differences in the selection and use of decision criteria for prequalification.
Abstract:
This thesis presents a multi-criteria optimisation study of group replacement schedules for water pipelines, which is a capital-intensive and service-critical decision. A new mathematical model was developed, which minimises total replacement costs while maintaining a satisfactory level of service. The research outcomes are expected to enrich the body of knowledge of multi-criteria decision optimisation where group scheduling is required. The model has the potential to optimise replacement planning for other types of linear asset networks, resulting in bottom-line benefits for end users and communities. The results of a real case study show that the new model can effectively reduce total costs and service interruptions.
Abstract:
The purpose of this paper is to introduce the concept of hydraulic damage and its numerical integration. Unlike common phenomenological continuum damage mechanics approaches, the procedure introduced in this paper relies on mature concepts of homogenization, linear fracture mechanics, and thermodynamics. The model is applied to the problem of fault reactivation within resource reservoirs. The results show that the propagation of weaknesses is strongly driven by the contrasts of properties in porous media. In particular, it is affected by the fracture toughness of the host rocks. Hydraulic damage is diffuse when it takes place within extended geological units and localized at interfaces and faults.
Abstract:
Studies on quantitative fit analysis of precontoured fracture fixation plates have emerged only within the last few years; therefore, there is a wide research gap in this area. Quantitative fit assessment facilitates measurement of the gap between a fracture fixation plate and the underlying bone, and specifies the required plate fit criteria. For a clinically meaningful fit assessment outcome, it is necessary to establish appropriate criteria and parameters. The present paper studies this subject and recommends using multiple fit criteria, with the maximum distance between the plate and the underlying bone as the fit parameter, for clinically relevant outcomes. We also propose the development of a software tool for automatic plate positioning and fit assessment for the purpose of implant design validation and optimization, in an effort to provide better-fitting implants that can assist proper fracture healing. The fundamental specifications of the software are discussed.
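The recommended fit parameter lends itself to direct computation. Below is a minimal sketch, assuming the plate underside and bone surface are sampled as 3D point sets; the nearest-neighbour pairing and the 2 mm threshold are illustrative assumptions, not values from the paper.

import math

def max_plate_bone_distance(plate_points, bone_points):
    """Max over plate points of the distance to the nearest bone point (points are (x, y, z))."""
    return max(min(math.dist(p, q) for q in bone_points) for p in plate_points)

def plate_fits(plate_points, bone_points, threshold_mm=2.0):
    # The plate fits when the worst-case plate-bone gap stays under the threshold.
    return max_plate_bone_distance(plate_points, bone_points) <= threshold_mm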
Abstract:
A Software-as-a-Service, or SaaS, can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled – replicated or deleted – to accommodate the user's load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components are to be scaled such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that utilises problem-specific knowledge and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
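One way to realise such a search is a plain genetic algorithm over per-component scaling actions. This is a minimal sketch, assuming each gene is -1 (delete an instance), 0 (keep), or +1 (replicate); the fitness function and parameters are illustrative assumptions and do not reproduce the thesis's hybrid operators.

import random

ACTIONS = (-1, 0, 1)  # delete an instance, keep, replicate

def fitness(plan, load, capacity):
    # Penalise unmet load (under-provisioning) more heavily than idle instances.
    new_capacity = [c + a for c, a in zip(capacity, plan)]
    unmet = sum(max(l - c, 0) for l, c in zip(load, new_capacity))
    idle = sum(max(c - l, 0) for l, c in zip(load, new_capacity))
    return -(10 * unmet + idle)

def evolve_scaling_plan(load, capacity, pop_size=30, generations=100):
    n = len(load)
    pop = [[random.choice(ACTIONS) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, load, capacity), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n) if n > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                     # point mutation
                child[random.randrange(n)] = random.choice(ACTIONS)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, load, capacity))

# e.g. evolve_scaling_plan(load=[3, 1, 5], capacity=[2, 2, 4])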
Abstract:
This study attempts to provide a criteria-based approach that can be used to evaluate the potential for technology transfer and commercialisation of a new technology from university research. More specifically, this study offers critical factors for assessing the marketability and feasibility of an innovation for the commercialisation and technology transfer process. The Delphi technique has been used to refine and categorise assessment criteria identified from various models and frameworks that emerged from the literature. Proposed categories of criteria found to be important in the evaluation and assessment of a new technology for commercialisation purposes include: Technological Readiness; Legal and Regulatory; Social Benefits and Impact; and Economic and Market Factors.
Abstract:
This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e., the autonomous vehicles' ability to make appropriate driving decisions in city road traffic situations. The paper explains the overall control system architecture and the decision-making task decomposition, and focuses on how Multiple Criteria Decision Making (MCDM) is used in the process of selecting the most appropriate driving maneuver from the set of feasible ones. Experimental tests show that MCDM is suitable for this new application area.
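A common MCDM formulation scores each feasible maneuver against weighted criteria and picks the top scorer. Below is a minimal sketch; the criteria, weights, and candidate maneuvers are illustrative assumptions, not the paper's actual model.

CRITERIA_WEIGHTS = {"safety": 0.5, "comfort": 0.2, "progress": 0.3}

def select_maneuver(feasible):
    """feasible: {maneuver: {criterion: normalised score in [0, 1]}}."""
    def weighted_score(ratings):
        return sum(w * ratings[c] for c, w in CRITERIA_WEIGHTS.items())
    return max(feasible, key=lambda m: weighted_score(feasible[m]))

select_maneuver({
    "keep_lane":   {"safety": 0.9, "comfort": 0.8, "progress": 0.5},
    "change_left": {"safety": 0.6, "comfort": 0.5, "progress": 0.9},
})  # -> "keep_lane"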
Abstract:
The nonlinear stability analysis introduced by Chen and Haughton [1] is employed to study the full nonlinear stability of the non-homogeneous spherically symmetric deformation of an elastic thick-walled sphere. The shell is composed of an arbitrary homogeneous, incompressible elastic material. The stability criterion ultimately requires the solution of a third-order nonlinear ordinary differential equation. Numerical results obtained for a wide variety of well-known incompressible materials are then compared with existing bifurcation results and are found to be identical. Further analysis and comparison between stability and bifurcation are conducted for the case of thin shells, and we prove by direct calculation that the two criteria are identical for all modes and all materials.