863 results for Design quality
Abstract:
Developments of surgical attachments for bone-anchored prostheses are slowly but surely winning over the initial disbelief in the orthopedic community, and this option is becoming accessible to a wide range of individuals with limb loss. Seminal studies have demonstrated that the pioneering procedure relying on screw-type fixation engenders major clinical benefits and acceptable safety. The surgical procedure for press-fit implants, such as the Integral-Leg-Prosthesis (ILP), has been described by Dr Aschoff and his team, and some clinical benefits of press-fit implants have also been established. Here, his team is once again taking a leading role by sharing the 15-year progression of the rate of deep infections for 69 individuals with transfemoral amputation fitted with three successive refined versions of the ILP. By definition, a double-blind randomized clinical trial to test the effect of different fixation designs is difficult. Alternatively, Juhnke and colleagues report the outcomes of an action-research study for a cohort of participants. The foremost outcome of this study is the confirmation that the current design of the ILP and the rehabilitation program together lead to an acceptable rate of deep infection and other adverse events (e.g., structural failure of the implant, periprosthetic fractures). This study also provides strong insight into the effect of major redesign phases of an implant on the risk of infection. This is an important reminder that the development of a successful osseointegrated implant is unlikely to be immediate; rather, it is the result of a learning curve made of empirical and sequential changes led by reflective clinical practice. Clearly, this study provides a better understanding of the safety of the ILP surgical and rehabilitation procedure while establishing standards and benchmark data for future studies focusing on the design and infection of press-fit implants.
Complementary observations of the relationship between infection and confounders, such as loading of the prosthesis and the prosthetic components used, would be beneficial. Further definitive evidence of the clinical benefits of the latest design would also be valuable, although increases in health-related quality of life and functional outcomes are likely to be confirmed. Altogether, the authors provide compelling evidence that bone-anchored attachments, particularly those relying on press-fit implants, are an established alternative to socket prostheses.
Abstract:
This paper develops theory that quantifies transit route passenger-relative load factor and distinguishes it from occupancy load factor. The ratio between these measures is defined as the load diversity coefficient, which as a single measure characterizes the diversity of passenger load factor between route segments according to the origin-destination profile. The relationship between load diversity coefficient and route coefficient of variation in occupancy load factor is quantified. Two tables are provided that enhance passenger capacity and quality of service (QoS) assessment regarding onboard passenger load. The first expresses the transit operator’s perspective of load diversity and the passengers’ perspective of load factor relative to the operator’s, across six service levels corresponding to ranges of coefficient of variation in occupancy load factor. The second interprets the relationships between passenger average travel time and each of passenger-relative load factor and occupancy load factor. The application of this methodology is illustrated using a case study of a premium radial bus route in Brisbane, Australia. The methodology can assist in benchmarking and decision making regarding route and schedule design. Future research will apply value of time to QoS measurement, reflecting perceived passenger comfort through crowding and average time spent aboard. This would also assist in transit service quality econometric modeling.
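As a rough illustration of the distinction drawn above, the sketch below computes an occupancy load factor (average load over segments, relative to capacity), a passenger-relative load factor (each segment's load factor weighted by the passengers aboard that segment), and their ratio as the load diversity coefficient. These formulas are plausible simplifications for illustration only; the paper derives its measures from the full origin-destination profile, so its exact definitions may differ.

```python
# Hypothetical sketch of the two load-factor perspectives; the
# definitions below are assumptions, not the paper's formulation.

def load_diversity(segment_loads, capacity):
    """segment_loads: passengers aboard on each route segment."""
    n = len(segment_loads)
    # Occupancy load factor: average load over segments, relative to capacity.
    occupancy_lf = sum(segment_loads) / (n * capacity)
    # Passenger-relative load factor: each passenger experiences the load
    # factor of the segments they ride, so weight segments by their load.
    passenger_lf = sum(l * (l / capacity) for l in segment_loads) / sum(segment_loads)
    return passenger_lf / occupancy_lf  # load diversity coefficient

# A flat load profile yields a coefficient of 1; a peaked one exceeds 1,
# reflecting greater diversity of load between segments.
flat = load_diversity([40, 40, 40, 40], capacity=60)
peaked = load_diversity([10, 70, 70, 10], capacity=60)
```

With these numbers the flat profile gives exactly 1.0 and the peaked profile a larger coefficient, matching the idea that the coefficient summarizes how unevenly load is distributed along the route.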
Abstract:
Background: Falls among hospitalised patients impose a considerable burden on health systems globally, and prevention is a priority. Some patient-level interventions have been effective in reducing falls, but others have not. An alternative and promising approach to reducing inpatient falls is modification of the hospital physical environment, and the night lighting of hospital wards is a leading candidate for investigation. In this pilot trial, we will determine the feasibility of conducting a main trial to evaluate the effects of modified night lighting on inpatient ward-level fall rates. We will also test the feasibility of collecting novel forms of patient-level data through a concurrent observational sub-study. Methods/design: A stepped wedge, cluster randomised controlled trial will be conducted in six inpatient wards over 14 months in a metropolitan teaching hospital in Brisbane (Australia). The intervention will consist of supplementary night lighting installed across all patient rooms within study wards. The planned placement of luminaires, configurations and spectral characteristics are based on prior published research and pre-trial testing and modification. We will collect data on rates of falls on study wards (falls per 1000 patient days), the proportion of patients who fall once or more, and average length of stay. We will recruit two patients per ward per month to a concurrent observational sub-study aimed at understanding potential impacts on a range of patient sleep and mobility behaviours. The effect on the environment will be monitored with sensors to detect variation in light levels and night-time room activity. We will also collect data on possible patient-level confounders including demographics, pre-admission sleep quality, reported vision, hearing impairment and functional status.
Discussion: This pragmatic pilot trial will assess the feasibility of conducting a main trial to investigate the effects of modified night lighting on inpatient fall rates using several new methods previously untested in the context of environmental modifications and patient safety. Pilot data collected through both parts of the trial will be utilised to inform sample size calculations, trial design and final data collection methods for a subsequent main trial.
Abstract:
The built environment is a major contributor to the world’s carbon dioxide emissions, with a considerable amount of energy being consumed in buildings for heating, ventilation and air-conditioning, space illumination, use of electrical appliances, etc., to facilitate various anthropogenic activities. The development of sustainable buildings seeks to ameliorate this situation mainly by reducing energy consumption. Sustainable building design, however, is a complicated process involving a large number of design variables, each with a range of feasible values. There are also multiple, often conflicting, objectives involved, such as life cycle costs and occupant satisfaction. One approach to dealing with this is the use of optimization models. In this paper, a new multi-objective optimization model is developed for sustainable building design by considering the design objectives of cost and energy consumption minimization and occupant comfort level maximization. In a case study demonstration, it is shown that the model can derive a set of suitable design solutions in terms of life cycle cost, energy consumption and indoor environmental quality, helping the client and design team gain a better understanding of the design space and the trade-off patterns between different design objectives. The model can be very useful in the conceptual design stages to determine appropriate operational settings that achieve optimal building performance in terms of minimizing energy consumption and maximizing occupant comfort.
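The trade-off reasoning in the abstract rests on Pareto dominance: a candidate design is worth keeping only if no other candidate is at least as good on every objective and strictly better on at least one. A minimal sketch with made-up candidate designs (the names and numbers are purely illustrative, not from the paper's case study):

```python
# Illustrative Pareto filter for multi-objective building design.
# All three objectives are framed for minimization: life-cycle cost,
# energy consumption, and discomfort (i.e. negated comfort).

def pareto_front(designs):
    """designs: list of (name, (cost, energy, discomfort)) tuples."""
    front = []
    for name, obj in designs:
        # A design is dominated if some other design is no worse on
        # every objective and differs somewhere (hence strictly better).
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(obj, other)) and other != obj
            for _, other in designs
        )
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("baseline",  (100.0, 50.0, 0.30)),
    ("efficient", (120.0, 35.0, 0.25)),   # costlier but better elsewhere
    ("wasteful",  (110.0, 55.0, 0.35)),   # dominated by "baseline"
]
front = pareto_front(candidates)
```

The surviving set is the kind of "set of suitable design solutions" the abstract describes: no member can be improved on one objective without worsening another.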
Abstract:
Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today, so conservation efforts are required if we want to protect it. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Therefore, reserve design methods have been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To be able to select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of a species at surveyed sites to the environmental conditions at those locations (e.g. climatic, vegetation and soil conditions). The probability of the species occurring at unvisited locations is then predicted by the model, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. A way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required. As a result, the number of conservation options per site increases. Currently available reserve selection tools, however, do not offer the ability to handle multiple, alternative options per site.
This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple, alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion, in the reserve selection (single-option) context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is, however, complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used to gain information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species habitat and spatial requirements. Many real-world conservation planning problems are characterised by a variety of conservation options related to protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site in order to continue the search for cost-effective conservation planning solutions. Simultaneously, the spatial requirements of species need to be considered.
The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
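One way to picture the multi-option setting described above is a greedy cost-effectiveness sketch: each site offers alternative actions (e.g. protect, restore) with different costs and species benefits, at most one action may be chosen per site, and options are picked by benefit per unit cost until the budget runs out. This is an assumption-laden illustration, not the thesis's actual optimization method, which also handles spatial cohesion and between-species trade-offs:

```python
# Toy greedy planner for multi-option site selection. Sites, actions,
# costs, and benefits are invented for illustration.

def greedy_plan(sites, budget):
    """sites: {site: [(action, cost, benefit), ...]}; returns {site: action}."""
    chosen = {}
    spent = 0.0
    while True:
        best = None
        for site, options in sites.items():
            if site in chosen:          # at most one action per site
                continue
            for action, cost, benefit in options:
                if spent + cost <= budget:
                    ratio = benefit / cost
                    if best is None or ratio > best[0]:
                        best = (ratio, site, action, cost)
        if best is None:
            return chosen
        _, site, action, cost = best
        chosen[site] = action
        spent += cost

plan = greedy_plan(
    {"A": [("protect", 2.0, 5.0), ("restore", 5.0, 9.0)],
     "B": [("protect", 3.0, 4.0)]},
    budget=5.0,
)
```

Here the cheaper "protect" action at site A beats the more beneficial but costlier "restore", leaving budget for site B, which is exactly the kind of per-site trade-off the multi-option framework must resolve.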
Abstract:
Rapid growth in the global population requires expansion of the building stock, which in turn increases energy demand. This demand varies in time and also between different buildings; yet conventional methods are only able to provide mean energy levels per zone and are unable to capture this inhomogeneity, which is important for conserving energy. An additional challenge is that some of the attempts to conserve energy, for example through lowering of ventilation rates, have been shown to exacerbate another problem: unacceptable indoor air quality (IAQ). The rise of sensing technology over the past decade has shown potential to address both these issues simultaneously by providing high-resolution spatio-temporal data to systematically analyse energy demand and consumption, as well as the impacts on IAQ of measures taken to control energy consumption. However, challenges remain in the development of affordable services for data analysis, the deployment of large-scale real-time sensing networks, and responding through Building Energy Management Systems. This article presents the fundamental drivers behind the rise of sensing technology for the management of energy and IAQ in urban built environments, highlights major challenges for their large-scale deployment and identifies the research gaps that should be closed by future investigations.
Abstract:
In this paper, we present the design and characterization of a vibratory yaw rate MEMS sensor that uses in-plane motion for both actuation and sensing. The design criteria for the rate sensor are high sensitivity and low bandwidth. The required sensitivity of the yaw rate sensor is attained by using in-plane motion, in which the dominant damping mechanism is the fluid loss due to slide-film damping, which is two to three orders of magnitude less than the squeeze-film damping in other rate sensors with out-of-plane motion. The low bandwidth is achieved by matching the drive and sense mode frequencies. Based on these factors, the yaw rate sensor is designed and finally realized using surface micromachining. The in-plane motion of the sensor is experimentally characterized to determine the sense and drive mode frequencies and the corresponding damping ratios. It is found that the experimental results match well with the numerical and analytical models, with less than 5% error in the frequency measurements. The measured quality factor of the sensor is approximately 467, which is two orders of magnitude higher than that of a similar rate sensor with an out-of-plane sense direction.
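For context on the reported quality factor, a lightly damped second-order resonator obeys Q ≈ 1/(2ζ) with half-power bandwidth Δf ≈ f0/Q. The snippet below uses these generic textbook relations, not the authors' model, and the 10 kHz resonance frequency is an assumed value for illustration only:

```python
# Generic second-order resonator relations (illustrative assumptions,
# not the paper's numerical model).

def q_from_damping(zeta):
    """Quality factor of a lightly damped resonator: Q = 1 / (2*zeta)."""
    return 1.0 / (2.0 * zeta)

def half_power_bandwidth(f0_hz, q):
    """Half-power (-3 dB) bandwidth: delta_f = f0 / Q."""
    return f0_hz / q

# The reported Q of ~467 corresponds to a damping ratio of about 0.00107.
zeta = 1.0 / (2.0 * 467.0)
q = q_from_damping(zeta)
bw = half_power_bandwidth(10e3, q)   # assumed 10 kHz resonance
```

This illustrates why the damping mechanism matters: the lower slide-film damping (smaller ζ) directly raises Q by the same factor it lowers the loss.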
Abstract:
Urban agglomerations, where innovation and knowledge generation activities take place, are in tough competition to become major players in the global knowledge economy. It is claimed that soft measures, namely quality of life and place, help in fostering and attracting talent, and consequently draw investment to these urban localities. This paper aims to scrutinise the role of soft measures in supporting urban competitiveness through a critical review of the scholarly literature. The findings shed some light on whether there is a symbiotic relationship between place quality and urban competitiveness. The paper also points out directions for future investigations.
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method proposes that the overall exchange problem be solved as two separate and simpler problems: 1) a forward auction and 2) a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve a forward auction and a reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions.
Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near-optimal way in worst-case polynomial time.
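The quantity-matching step can be caricatured in a few lines: expand each piecewise-constant bid and ask curve into per-unit price steps, then trade greedily while the best remaining bid exceeds the best remaining ask. This toy sketch fixes traded quantities only; it omits the generalized-knapsack forward/reverse auctions and the Vickrey pricing that the paper actually uses, and the step curves shown are invented for illustration:

```python
import heapq

def clear(buy_curves, sell_curves):
    """Curves are lists of (price_per_unit, quantity) steps.
    Returns (units traded, total trade surplus)."""
    bids = []   # max-heap on bid price (store negated prices)
    for curve in buy_curves:
        for price, qty in curve:
            heapq.heappush(bids, (-price, qty))
    asks = []   # min-heap on ask price
    for curve in sell_curves:
        for price, qty in curve:
            heapq.heappush(asks, (price, qty))
    traded, surplus = 0, 0.0
    # Trade while the highest bid still exceeds the lowest ask.
    while bids and asks and -bids[0][0] > asks[0][0]:
        bp, bq = heapq.heappop(bids)
        ap, aq = heapq.heappop(asks)
        q = min(bq, aq)
        traded += q
        surplus += (-bp - ap) * q
        if bq > q:                       # push back the unmatched remainder
            heapq.heappush(bids, (bp, bq - q))
        if aq > q:
            heapq.heappush(asks, (ap, aq - q))
    return traded, surplus

units, gain = clear(
    buy_curves=[[(10, 5), (8, 5)]],   # buyer: 10 each for first 5 units, 8 next
    sell_curves=[[(6, 4), (9, 6)]],   # seller: 6 each for first 4 units, 9 next
)
```

With these curves, five units trade: four at a 10-vs-6 spread and one at 10-vs-9, after which the next bid (8) falls below the next ask (9) and trading stops.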
Abstract:
The current approach for protecting the receiving water environment from urban stormwater pollution is the adoption of structural measures commonly referred to as Water Sensitive Urban Design (WSUD). The treatment efficiency of WSUD measures closely depends on the design of the specific treatment units. As stormwater quality is influenced by rainfall characteristics, the selection of appropriate rainfall events for treatment design is essential to ensure the effectiveness of WSUD systems. Based on extensive field investigations in four urban residential catchments on the Gold Coast, Australia, and computer modelling, this paper details a technically robust approach for the selection of rainfall events for stormwater treatment design using a three-component model. The modelling results confirmed that high intensity-short duration events produce 58.0% of the TS load while generating only 29.1% of the total runoff volume. Additionally, rainfall events smaller than the 6-month average recurrence interval (ARI) generate a greater cumulative runoff volume (68.4% of the total annual runoff volume) and TS load (68.6% of the TS load exported) than rainfall events larger than the 6-month ARI. The results suggest that for the study catchments, stormwater treatment design could be based on rainfall events with a mean average intensity of 31 mm/h and a mean duration of 0.4 h. These outcomes also confirmed that selecting smaller ARI rainfall events with high intensity and short duration as the threshold for treatment system design is the most feasible approach, since these events cumulatively generate a major portion of the annual pollutant load compared to other types of events, despite producing a relatively smaller runoff volume. This implies that designs based on small and more frequent rainfall events rather than larger rainfall events would be appropriate in the context of efficiency in treatment performance, cost-effectiveness and possible savings in the land area needed.
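The event-selection logic above can be mimicked with a toy calculation: classify events by intensity and duration thresholds, then accumulate each class's share of runoff volume and TS load. The thresholds and event records below are invented for illustration and are not the paper's data:

```python
# Illustrative event classification; thresholds and records are assumptions.
HIGH_INTENSITY = 30.0   # mm/h threshold, assumed for illustration
SHORT_DURATION = 1.0    # hours, assumed for illustration

def load_shares(events):
    """events: dicts with intensity (mm/h), duration (h), runoff, ts_load.
    Returns the runoff and TS-load shares of high intensity-short
    duration events."""
    tot_runoff = sum(e["runoff"] for e in events)
    tot_ts = sum(e["ts_load"] for e in events)
    sel = [e for e in events
           if e["intensity"] >= HIGH_INTENSITY and e["duration"] <= SHORT_DURATION]
    runoff_share = sum(e["runoff"] for e in sel) / tot_runoff
    ts_share = sum(e["ts_load"] for e in sel) / tot_ts
    return runoff_share, ts_share

events = [
    {"intensity": 35, "duration": 0.4, "runoff": 10, "ts_load": 50},
    {"intensity": 12, "duration": 3.0, "runoff": 25, "ts_load": 30},
    {"intensity":  5, "duration": 6.0, "runoff": 15, "ts_load": 10},
]
runoff_share, ts_share = load_shares(events)
```

In this made-up record set, the single high intensity-short duration event carries over half the TS load while producing only a fifth of the runoff, the same qualitative pattern the paper reports (58.0% of TS load from 29.1% of runoff).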
Abstract:
Light is essential to life and vision; without light, nothing exists. It plays a pivotal role in the world of architectural design and is used to generate all manner of perceptions that enhance the designed environment experience. But what are the fundamental elements that designers rely upon to generate light-enhanced experiences? How are people’s perceptions influenced by designed light schemas? In this book, Dr. Marisha McAuliffe highlights the relationship that exists between light source and surface and how both create quality of effect in the built environment. Concepts relating to architectural lighting design history, theories, research, and the generation of lighting design schemes to create optimal experiences in architecture, interior architecture and design are all explored in detail. This book is essential reading for both the student and the professional working in architectural lighting, particularly in terms of qualitative, perception-oriented lighting design.
Abstract:
This research is connected with an education development project for the four-year-long officer education program at the National Defence University. In this curriculum, physics was studied in two alternative course plans, namely scientific and general. Observations connected to the latter, e.g. student feedback and learning outcomes, gave indications that action was needed to support the course. The reform work was focused on the production of aligned, course-related instructional material. The learning material project produced a customized textbook set for the students of the general basic physics course. The research adapts phases that are typical in Design-Based Research (DBR). The research analyses the feature requirements for a physics textbook aimed at a specific sector and the frames supporting instructional material development, and summarizes the experiences gained in the learning material project when the selected frames have been applied. The quality of instructional material is an essential part of qualified teaching. The goal of instructional material customization is to increase the product's customer-centric nature and to enhance its function as a support medium for the learning process. Textbooks are still one of the core elements in physics teaching. The idea of a textbook will remain, but the form and appearance may change according to the prevailing technology. The work deals with substance-connected frames (demands of a physics textbook according to the PER viewpoint, quality thinking in educational material development), frames of university pedagogy, and instructional material production processes. A wide knowledge and understanding of different frames are useful in development work if they are utilized to aid inspiration without limiting new reasoning and new kinds of models. Applying customization even in frame utilization supports creative and situation-aware design and diminishes the gap between theory and practice.
Generally, physics teachers produce their own supplementary instructional material. Even though customization thinking is not unknown, the threshold to produce an entire textbook might be high. Even though the observations here are from the general physics course at the NDU, the research also provides tools for development in other discipline-related educational contexts. This research is an example of instructional material development work, together with the questions it uncovers, and presents thoughts on when textbook customization is rewarding. At the same time, the research aims to further creative customization thinking in instruction and development. Key words: Physics textbook, PER (Physics Education Research), Instructional quality, Customization, Creativity
Abstract:
Query incentive networks capture the role of incentives in extracting information from decentralized information networks such as a social network. Several game-theoretic models of query incentive networks have been proposed in the literature to study and characterize the dependence of the monetary reward required to extract the answer for a query on various factors, such as the structure of the network, the level of difficulty of the query, and the required success probability. None of the existing models, however, captures the practical and important factor of the quality of answers. In this paper, we develop a complete mechanism-design-based framework to incorporate the quality of answers in the monetization of query incentive networks. First, we extend the model of Kleinberg and Raghavan [2] to allow the nodes to modulate the incentive on the basis of the quality of the answer they receive. For this quality-conscious model, we show the existence of a unique Nash equilibrium and study the impact of the quality of answers on the growth rate of the initial reward with respect to the branching factor of the network. Next, we present two mechanisms, the direct comparison mechanism and the peer prediction mechanism, for truthful elicitation of quality from the agents. These mechanisms are based on scoring rules and cover different scenarios which may arise in query incentive networks. We show that the proposed quality elicitation mechanisms are incentive compatible and ex-ante budget balanced. We also derive conditions under which ex-post budget balance can be achieved by these mechanisms.
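As background for the scoring-rule machinery mentioned above, the quadratic (Brier) rule is one standard strictly proper scoring rule: paying an agent according to it makes truthful reporting of beliefs about answer quality maximize expected payment. The snippet is a generic illustration of that property, not the paper's direct comparison or peer prediction payment scheme:

```python
# Quadratic (Brier) scoring rule over discrete quality levels.
def quadratic_score(report, outcome):
    """report: dict mapping quality level -> reported probability.
    outcome: the realized quality level."""
    return 2.0 * report[outcome] - sum(p * p for p in report.values())

def expected_score(report, belief):
    """Expected payment when outcomes are drawn from `belief`."""
    return sum(belief[x] * quadratic_score(report, x) for x in belief)

# An agent who believes quality is "high" with probability 0.7 earns
# more in expectation by reporting 0.7 than by exaggerating to 0.9.
belief = {"high": 0.7, "low": 0.3}
truthful = {"high": 0.7, "low": 0.3}
exaggerated = {"high": 0.9, "low": 0.1}

e_truth = expected_score(truthful, belief)
e_lie = expected_score(exaggerated, belief)
```

This strict-properness is the basic ingredient that lets scoring-rule-based mechanisms elicit quality truthfully; the budget-balance properties in the paper come from how such payments are combined across agents.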
Abstract:
The H.264 video standard achieves high-quality video along with high data compression when compared to other existing video standards. H.264 uses context-based adaptive variable length coding (CAVLC) to code residual data in the Baseline profile. In this paper we describe a novel architecture for a CAVLC decoder, including the coeff-token decoder, level decoder, total-zeros decoder, and run-before decoder. A UMC library in 0.13 μm CMOS technology is used to synthesize the proposed design. The proposed design reduces chip area and improves the critical path performance of the CAVLC decoder in comparison with [1]. Macroblock-level (including luma and chroma) pipeline processing for CAVLC is implemented with an average of 141 cycles (including pipeline buffering) per macroblock at a 250 MHz clock frequency. To compare our results with [1], the clock frequency is constrained to 125 MHz. The area required for the proposed architecture is 17586 gates, which is a 22.1% improvement in comparison to [1]. We obtain a throughput of 1.73 × 10^6 macroblocks/second, which is 28% higher than that reported in [1]. The proposed design meets the processing requirement of 1080HD [5] video at 30 frames/second.
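A quick back-of-envelope check of the reported figures, assuming 1080HD is coded as 120 × 68 = 8160 macroblocks per frame (1920 × 1088 in 16 × 16 macroblocks):

```python
# Illustrative arithmetic only; the quoted 1.73e6 MB/s figure from the
# paper is slightly below this ideal peak due to buffering overheads.
CLOCK_HZ = 250e6
CYCLES_PER_MB = 141

throughput_mb_per_s = CLOCK_HZ / CYCLES_PER_MB        # peak macroblocks/second
mbs_per_frame = (1920 // 16) * (1088 // 16)           # 8160 macroblocks
required_mb_per_s = mbs_per_frame * 30                # 1080HD at 30 fps

headroom = throughput_mb_per_s / required_mb_per_s
```

The decoder's throughput exceeds the 1080HD requirement by roughly a factor of seven, consistent with the claim that the design meets the 30 frames/second target with margin.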
Abstract:
802.11 WLANs are characterized by high bit error rates and frequent changes in network topology. The key feature that distinguishes WLANs from wired networks is the multi-rate transmission capability, which helps to accommodate a wide range of channel conditions. This has a significant impact on higher layers such as the routing and transport levels. While many WLAN products provide rate control at the hardware level to adapt to channel conditions, some chipsets, like Atheros, do not have support for automatic rate control. We first present a design and implementation of an FER-based automatic rate control state machine, which utilizes the statistics available at the device driver to find the optimal rate. The results show that the proposed rate switching mechanism adapts quite fast to the channel conditions. The hop count metric used by current routing protocols has proven itself for single-rate networks, but it fails to take into account other important factors in a multi-rate network environment. We propose transmission time as a better path quality metric to guide routing decisions. It incorporates the effects of contention for the channel, the air time to send the data, and the asymmetry of links. In this paper, we present a new design for a multi-rate mechanism as well as a new routing metric that is responsive to the rate. We address the issues involved in using transmission time as a metric and present a comparison of the performance of different metrics for dynamic routing.
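A minimal sketch of why a transmission-time metric can disagree with hop count: assume each link costs a fixed per-hop overhead plus the payload air time at that link's rate. The constants below are invented, and the paper's metric additionally models contention and link asymmetry, but the core effect survives: a longer path of fast links can beat a shorter path of slow links.

```python
# Assumed per-link cost model for illustration only.
PACKET_BITS = 12_000     # 1500-byte payload
OVERHEAD_S = 0.0005      # assumed fixed per-hop overhead (contention, ACK)

def link_time(rate_bps):
    """Per-packet transmission time over one link at the given rate."""
    return OVERHEAD_S + PACKET_BITS / rate_bps

def path_time(rates_bps):
    """Path metric: sum of per-link transmission times."""
    return sum(link_time(r) for r in rates_bps)

one_slow_hop = path_time([1e6])           # single 1 Mb/s link
two_fast_hops = path_time([11e6, 11e6])   # two 11 Mb/s links
```

Hop count would pick the single 1 Mb/s link, but the transmission-time metric prefers the two 11 Mb/s hops, whose total air time is several times smaller.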