204 results for Input-Output Matrix
Abstract:
Custom designed for display on the Cube Installation situated in the new Science and Engineering Centre (SEC) at QUT, the ECOS project is a playful interface that uses real-time weather data to simulate how a five-star energy-rated building operates in climates all over the world. In collaboration with the SEC building managers, the ECOS Project incorporates the building's energy consumption and generation data into an interactive simulation that is both engaging and highly informative, and that invites play and reflection on the role of green buildings. ECOS focuses on the principle that humans can have both positive and negative impacts on ecosystems, with both local and global consequences. The ECOS project draws on the practice of Eco-Visualisation, a term used to encapsulate the merging of environmental data visualisation with the philosophy of sustainability. Holmes (2007) uses the term Eco-Visualisation (EV) to refer to data visualisations that ‘display the real time consumption statistics of key environmental resources for the goal of promoting ecological literacy’. EVs are commonly artifacts of interaction design, information design, interface design and industrial design, but are informed by various intellectual disciplines that have shared interests in sustainability. After surveying a number of projects, Pierce, Odom and Blevis (2008) outline strategies for designing and evaluating effective EVs, including ‘connecting behavior to material impacts of consumption, encouraging playful engagement and exploration with energy, raising public awareness and facilitating discussion, and stimulating critical reflection.’ Similarly, Froehlich (2010) and his colleagues use the term ‘Eco-feedback technology’ to describe the same field. ‘Green IT’ is another variation, which Tomlinson (2010) describes as a ‘field at the juncture of two trends… the growing concern over environmental issues’ and ‘the use of digital tools and techniques for manipulating information.’ The ECOS Project team is guided by these principles but, more importantly, proposes an example of how these principles may be achieved.

The ECOS Project presents a simplified interface to the very complex domain of thermodynamic and climate modelling. From a mathematical perspective, the simulation can be divided into two models, which interact and compete for balance: the comfort of ECOS’ virtual denizens and the ecological and environmental health of the virtual world. The comfort model is based on the study of psychrometrics, specifically the values relating to human thermal comfort. This provides baseline micro-climatic values for what constitutes a comfortable working environment within the QUT SEC buildings. The difference between the ambient outside temperature (as determined by polling the Google Weather API for live weather data) and the internal thermostat of the building (as set by the user) allows us to estimate the energy required to either heat or cool the building. Once the energy requirement is ascertained, it is balanced against the building's ability to produce enough power from green energy sources (solar, wind and gas) to cover it. The relative amount of energy produced by wind and solar can be calculated by, in the case of solar for example, considering the size of the panel and the amount of solar radiation it is receiving at any given time, which in turn can be estimated from the temperature and conditions returned by the live weather API.
Some of these variables can be altered by the user, allowing them to attempt to optimise the health of the building. The variables that can be changed are the budget allocated to green energy sources such as the Solar Panels and Wind Generator, and the Air Conditioning setting used to control the internal building temperature. These variables influence the energy input and output variables, modelled on the real energy usage statistics drawn from the SEC data provided by the building managers.
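To make the energy balance described above concrete, here is a minimal sketch of the kind of calculation involved. It is an illustrative approximation only, not the ECOS production model: the heat-loss coefficient, HVAC coefficient of performance and panel efficiency are assumed values.

# Illustrative sketch of an ECOS-style energy balance; all constants are assumptions.

def hvac_power_kw(outside_temp_c, thermostat_c, ua_kw_per_c=0.8, cop=3.0):
    """Electrical power needed to hold the thermostat against the outside temperature.

    ua_kw_per_c -- assumed overall heat-transfer coefficient of the building (kW/degC)
    cop         -- assumed coefficient of performance of the heating/cooling plant
    """
    thermal_load_kw = abs(outside_temp_c - thermostat_c) * ua_kw_per_c
    return thermal_load_kw / cop


def solar_power_kw(panel_area_m2, irradiance_kw_per_m2, efficiency=0.18):
    """Solar generation from panel size and current irradiance (assumed 18% efficiency)."""
    return panel_area_m2 * irradiance_kw_per_m2 * efficiency


if __name__ == "__main__":
    demand = hvac_power_kw(outside_temp_c=34.0, thermostat_c=23.0)
    supply = solar_power_kw(panel_area_m2=200.0, irradiance_kw_per_m2=0.65)
    print(f"demand {demand:.1f} kW, solar {supply:.1f} kW, balance {supply - demand:+.1f} kW")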
Abstract:
The capacity of current and future high-data-rate wireless communications depends significantly on how well changes in the wireless channel are predicted and tracked. Generally, the channel can be estimated by transmitting known symbols; however, this increases overheads if the channel varies over time. Given today’s bandwidth demand and the increased reliance on mobile wireless devices, the contributions of this research are significant. This study develops a novel and efficient channel-tracking algorithm that can recursively update the channel estimate for wireless broadband communications, reducing overheads and therefore increasing the speed of wireless communication systems.
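The abstract does not spell out the tracking algorithm, so the following sketch uses a standard least-mean-squares (LMS) recursion purely to illustrate how a channel estimate can be refreshed symbol by symbol rather than re-trained from scratch; the tap count, step size and QPSK test signal are assumptions.

import numpy as np

def lms_track(tx_symbols, rx_samples, n_taps=4, mu=0.05):
    """Recursively update a FIR channel estimate from known transmitted symbols."""
    h = np.zeros(n_taps, dtype=complex)
    for n in range(n_taps - 1, len(tx_symbols)):
        x = tx_symbols[n - n_taps + 1:n + 1][::-1]   # regressor: [tx[n], tx[n-1], ...]
        err = rx_samples[n] - np.dot(h, x)            # prediction error for this sample
        h = h + mu * err * np.conj(x)                 # stochastic-gradient correction
    return h

# Synthetic check: recover an assumed 4-tap channel from QPSK symbols.
rng = np.random.default_rng(0)
tx = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=2000) / np.sqrt(2)
true_h = np.array([0.9, 0.4, 0.2, 0.1], dtype=complex)
rx = np.convolve(tx, true_h)[:len(tx)] + 0.01 * rng.standard_normal(len(tx))
print(np.round(lms_track(tx, rx), 2))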
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of uncertainties in generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly for demanding SHM applications such as modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most prominent inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are initially used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed in order to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The results of this research show that uncertainties of generic WSNs can have a serious impact on Level 1 OMDI methods utilizing mode shapes. They also show that SHM-oriented WSNs can substantially lessen this impact and obtain true structural information without resorting to costly computational solutions.
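As a rough illustration of the contamination step described above (not the study's actual procedure), the sketch below takes a clean acceleration record and injects a fixed time-synchronization offset plus random packet loss; the sampling rate, offset and loss rate are assumed values.

import numpy as np

def contaminate(signal, fs_hz, sync_error_s=0.002, loss_rate=0.05, rng=None):
    """Shift a record by a synchronization error and drop samples at random."""
    rng = rng or np.random.default_rng()
    shift = int(round(sync_error_s * fs_hz))      # synchronization error in samples
    shifted = np.roll(signal, shift)
    lost = rng.random(signal.size) < loss_rate    # randomly lost packets
    polluted = shifted.copy()
    polluted[lost] = np.nan                       # lost samples become gaps
    return polluted

fs_hz = 200.0                                     # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs_hz)
clean = np.sin(2 * np.pi * 1.5 * t)               # stand-in for a measured modal response
polluted = contaminate(clean, fs_hz)
print(f"fraction of lost samples: {np.isnan(polluted).mean():.2%}")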
Abstract:
The use of Mahalanobis squared distance–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the Mahalanobis squared distance–based method is that it is simple and requires low computational effort to enable the use of a higher dimensional damage-sensitive feature, which is generally more sensitive to structural changes. Mahalanobis squared distance–based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Although possessing such advantages, this method is rather strict with its input requirement, as it assumes the training data to be multivariate normal, which is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology, with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setups for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of this scheme is demonstrated via application to data from a benchmark structure in the field.
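For readers unfamiliar with the core computation, the sketch below shows Mahalanobis squared distance novelty detection in its basic form: a training set defines the baseline mean and covariance, and test features with an unusually large distance are flagged as novel. The chi-squared threshold and the synthetic data are illustrative assumptions, not part of the proposed scheme.

import numpy as np
from scipy.stats import chi2

def msd(x, mean, cov_inv):
    """Mahalanobis squared distance of a feature vector from the training baseline."""
    d = x - mean
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(1)
train = rng.multivariate_normal(np.zeros(3), np.eye(3), size=500)   # baseline (healthy) features
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

threshold = chi2.ppf(0.99, df=train.shape[1])       # assumed 99% chi-squared threshold
test = np.array([3.5, -3.0, 2.8])                   # feature vector to screen
d2 = msd(test, mean, cov_inv)
print(d2, "novel" if d2 > threshold else "normal")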
Abstract:
A critical requirement for safe autonomous navigation of a planetary rover is the ability to accurately estimate the traversability of the terrain. This work considers the problem of predicting the attitude and configuration angles of the platform from terrain representations that are often incomplete due to occlusions and sensor limitations. Using Gaussian Processes (GP) and exteroceptive data as training input, we can provide a continuous and complete representation of terrain traversability, with uncertainty in the output estimates. In this paper, we propose a novel method that focuses on exploiting the explicit correlation in vehicle attitude and configuration during operation by learning a kernel function from vehicle experience to perform GP regression. We provide an extensive experimental validation of the proposed method on a planetary rover. We show significant improvement in the accuracy of our estimation compared with results obtained using standard kernels (Squared Exponential and Neural Network), and compared to traversability estimation made over terrain models built using state-of-the-art GP techniques.
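As background, the following sketch shows plain GP regression with the standard squared-exponential kernel that the paper uses as a baseline; the learned, experience-based kernel proposed in the paper is not reproduced here, and the toy terrain features and hyperparameters are assumptions.

import numpy as np

def se_kernel(A, B, length=1.0, variance=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """Predictive mean and variance of a zero-mean GP with an SE kernel."""
    K = se_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = se_kernel(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = se_kernel(X_test, X_test).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 5.0, (30, 2))          # toy terrain descriptors
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]         # stand-in for a platform attitude angle
print(gp_predict(X, y, X[:3]))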
Abstract:
BACKGROUND: Diabetes in South Asia represents a different disease entity in terms of its onset, progression, and complications. In the present study, we systematically analyzed the medical research output on diabetes in South Asia. METHODS: The online SciVerse Scopus database was searched using the search terms "diabetes" and "diabetes mellitus" in the article Title, Abstract or Keywords fields, in conjunction with the names of each regional country in the Author Affiliation field. RESULTS: In total, 8478 research articles were identified. Most were from India (85.1%) and Pakistan (9.6%), and the contribution to the global diabetes research output was 2.1%. Publications from South Asia increased markedly after 2007, with 58.7% of papers published between 2000 and 2010 appearing after 2007. Most papers were Research Articles (75.9%) and Reviews (12.9%), with only 90 (1.1%) clinical trials. Publications predominantly appeared in local national journals. Indian authors and institutions had the highest number of articles and the highest h-index. There were 136 (1.6%) intraregional collaborative studies. Only 39 articles (0.46%) had >100 citations. CONCLUSIONS: Regional research output on diabetes mellitus is unsatisfactory, with only a minimal contribution to global diabetes research. Publications are not highly cited and only a few randomized controlled trials have been performed. In the coming decades, scientists in the region must collaborate and focus on practical and culturally acceptable interventional studies on diabetes mellitus.
Abstract:
The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from the transmitting to the receiving transducer. The transit time of each ray is defined by the proportion of bone and marrow it propagates through. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of the input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared the experimental data with predictions from computer simulation. The agreement between experimental and predicted ultrasound transit time spectra, derived from Bland–Altman analysis, ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validating the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available using ultrasound.
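As a rough sketch of the deconvolution idea (not the study's implementation), the code below recovers a non-negative transit time spectrum from synthetic input and output signals using SciPy's non-negative least squares routine, which is itself an active-set style solver; the pulse shape, sampling rate and spectrum are made-up stand-ins.

import numpy as np
from scipy.optimize import nnls

def transit_time_spectrum(input_sig, output_sig, n_bins):
    """Solve output ~ input convolved with a non-negative transit-time spectrum."""
    m = len(output_sig)
    A = np.zeros((m, n_bins))
    for k in range(n_bins):                 # column k: input delayed by k samples
        seg = input_sig[: m - k]
        A[k:k + len(seg), k] = seg
    spectrum, _ = nnls(A, output_sig)
    return spectrum

fs = 1e7                                                    # assumed 10 MHz sampling
t = np.arange(0.0, 2e-6, 1.0 / fs)
pulse = np.sin(2 * np.pi * 1e6 * t) * np.hanning(t.size)    # assumed 1 MHz input pulse
true_spectrum = np.zeros(40)
true_spectrum[[10, 25]] = [1.0, 0.6]                        # two transit-time paths
output = np.convolve(pulse, true_spectrum)
print(np.round(transit_time_spectrum(pulse, output, 40), 2))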
Abstract:
Biodiesel, produced from renewable feedstocks, represents a more sustainable source of energy and will therefore play a significant role in providing the energy requirements for transportation in the near future. Chemically, all biodiesels are fatty acid methyl esters (FAME), produced from raw vegetable oil and animal fat. However, clear differences in chemical structure are apparent from one feedstock to the next in terms of chain length, degree of unsaturation, number of double bonds and double bond configuration, all of which determine the fuel properties of biodiesel. In this study, prediction models were developed to estimate the kinematic viscosity of biodiesel using an Artificial Neural Network (ANN) modelling technique. While developing the model, 27 parameters based on the chemical composition commonly found in biodiesel were used as the input variables, and the kinematic viscosity of biodiesel was used as the output variable. The data needed to develop and simulate the network were collected from more than 120 published peer-reviewed papers. The Neural Networks Toolbox of MATLAB R2012a was used to train, validate and simulate the ANN model on a personal computer. The network architecture and learning algorithm were optimised following a trial-and-error method to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the coefficient of determination (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found high predictive accuracy of the ANN in predicting the fuel properties of biodiesel and has demonstrated the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties. Therefore, the model developed in this study can be a useful tool to accurately predict biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.
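By way of illustration only, the sketch below reproduces the general idea in Python with scikit-learn rather than the MATLAB Neural Networks Toolbox used in the study; the 27 synthetic composition features, the target relationship and the network size are assumptions, not the study's data or architecture.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.random((300, 27))                  # 27 composition-based input variables (synthetic)
y = 2.5 + 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(300)   # toy viscosity target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)                       # train the feed-forward network
print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))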
Abstract:
Low voltage distribution feeders with large numbers of single-phase residential loads experience severe current unbalance that often causes voltage unbalance problems. The addition of intermittent generation and new loads in the form of rooftop photovoltaic generation and electric vehicles makes these problems even more acute. In this paper, an intelligent dynamic residential load transfer scheme is proposed. Residential loads can be transferred from one phase to another to minimize the voltage unbalance along the feeder. Each house is supplied through a static transfer switch with a three-phase input and single-phase output connection. The main controller, installed at the transformer, observes the power consumption of each load and determines which house(s) should be transferred from one phase to another in order to keep the voltage unbalance in the feeder at a minimum. The efficacy of the proposed load transfer scheme is verified through MATLAB and PSCAD/EMTDC simulations.
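The controller's core decision can be pictured with the greedy sketch below: given per-house power draws and current phase assignments, it finds the single transfer that most reduces the spread between phase loadings. This heuristic and the simple max-minus-min unbalance measure are illustrative assumptions, not the scheme or metric used in the paper.

from itertools import product

def best_transfer(loads_kw, phase_of_house):
    """Return (house, new_phase, resulting_spread) for the most beneficial transfer."""
    def spread(assign):
        totals = {"a": 0.0, "b": 0.0, "c": 0.0}
        for house, phase in assign.items():
            totals[phase] += loads_kw[house]
        return max(totals.values()) - min(totals.values())

    best = (None, None, spread(phase_of_house))
    for house, new_phase in product(loads_kw, "abc"):
        if new_phase == phase_of_house[house]:
            continue
        trial = dict(phase_of_house, **{house: new_phase})
        if spread(trial) < best[2]:
            best = (house, new_phase, spread(trial))
    return best

loads = {"h1": 4.2, "h2": 1.1, "h3": 3.8, "h4": 0.9, "h5": 2.5}
phases = {"h1": "a", "h2": "a", "h3": "a", "h4": "b", "h5": "c"}
print(best_transfer(loads, phases))   # e.g. move one house off the overloaded phase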
Abstract:
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting the acceptable uncertainty on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties were 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm. Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions: Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as the output factor measurement for each field size setting and also very precise detector alignment, is required at field sizes at least < 12 mm and more conservatively < 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
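The practical definition above can be illustrated numerically: given a tabulated OPF curve, estimate how much a 1 mm field-size error changes the OPF and flag field sizes where the change exceeds 1%. The OPF values below are placeholder numbers for illustration, not measured 6 MV data.

import numpy as np

field_mm = np.array([4, 6, 8, 10, 12, 15, 20, 30])                      # square field side (mm)
opf = np.array([0.60, 0.71, 0.79, 0.84, 0.87, 0.90, 0.93, 0.96])        # placeholder OPF values

slope = np.gradient(opf, field_mm)            # local change in OPF per mm of field size
rel_change_per_mm = np.abs(slope) / opf       # fractional OPF change caused by a 1 mm error

for size, rel in zip(field_mm, rel_change_per_mm):
    label = "very small (>1% per mm)" if rel > 0.01 else "ok"
    print(f"{size:>3} mm: {100 * rel:.1f}% OPF change per 1 mm -> {label}")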
Abstract:
Executive Summary

Emergency Departments (EDs) locally, nationally and internationally are becoming increasingly busy. Within this context, it can be challenging to deliver a health service that is safe, of high quality and cost-effective. Whilst various models are described within the literature that aim to measure ED ‘work’ or ‘activity’, they are often not linked to a measure of the costs of providing such activity. It is important for hospital and ED managers to understand and apply this link so that optimal staffing and financial resourcing can be justifiably sought. This research is timely given that Australia has moved towards a national Activity Based Funding (ABF) model for ED activity. ABF is believed to increase transparency of care and fairness (i.e. equal work receives equal pay). ABF involves a person-, performance- or activity-based payment system, and thus a move away from historical “block payment” models that do not incentivise efficiency and quality. The aim of the Statewide Workforce and Activity-Based Funding Modelling Project in Queensland Emergency Departments (SWAMPED) is to identify and describe best practice Emergency Department (ED) workforce models within the current context of ED funding that operates under an ABF model. The study comprises five distinct phases. This monograph (Phase 1) comprises a systematic review of the literature that was completed in June 2013. The remaining phases include a detailed survey of Queensland hospital EDs’ resource levels, activity and operational models of care; development of new resource models; development of a user-friendly modelling interface for ED managers; and production of a final report that identifies policy implications. The anticipated deliverable outcome of this research is the development of an ABF-based Emergency Workforce Modelling Tool that will enable ED managers to profile both their workforce and operational models of care. Additionally, the tool will assist with the ability to more accurately inform adequate staffing numbers required in the future, inform planning of expected expenditures, and be used for standardisation and benchmarking across similar EDs.

Summary of the Findings

Within the remit of this review of the literature, the main findings include:

1. EDs are becoming busier and more congested. Rising demand, barriers to ED throughput and transitions of care all contribute to ED congestion. In addition, requests by organisational managers and the community continue to broaden the scope of services required of the ED, further increasing demand. As the population lives longer with more lifestyle diseases, its propensity to require ED care continues to grow.

2. Various models of care within EDs exist. Models often vary to account for site-specific characteristics to suit the staffing profile, ED geographical location (e.g. metropolitan or rural site), and patient demographic profile (e.g. paediatrics, older persons, ethnicity). Existing and new models implemented within EDs often depend on the target outcome requiring change. Generally this is focussed on addressing issues at the input, throughput or output areas of the ED. Even with models targeting similar demographics or illnesses, the structure and process elements underpinning the model can vary, which can impact on outcomes and lead to variance in the patient and carer experience between and within EDs. Major models of care to manage throughput inefficiencies include:

A. Workforce Models of Care, which focus on the appropriate level of staffing for a given workload to provide prompt, timely and clinically effective patient care within an emergency care setting. The studies reviewed suggest that the early involvement of a senior medical decision-maker and/or specialised nursing roles such as Emergency Nurse Practitioners and the Clinical Initiatives Nurse, or primary contact or extended scope Allied Health Practitioners, can facilitate patient flow and improve key indicators such as length of stay and the number of patients who did not wait to be seen, amongst others.

B. Operational Models of Care within EDs, which focus on mechanisms for streaming (e.g. fast-tracking) or otherwise grouping patient care based on acuity and complexity to assist with minimising any throughput inefficiencies. While studies support the positive impact of these models in general, it appears that they are most effective when they are adequately resourced.

3. Various methods of measuring ED activity exist. Measuring ED activity requires careful consideration of models of care and staffing profile, and the ability to account for factors including patient census, acuity, length of stay (LOS), intensity of intervention and department skill-mix, plus an adjustment for non-patient-care time.

4. Gaps in the literature. Continued ED growth calls for new and innovative care delivery models that are safe, clinically effective and cost-effective. New roles and stand-alone service delivery models are often evaluated in isolation without considering the global and economic impact on staffing profiles. Whilst various models of accounting for and measuring health care activity exist, costing studies and cost-effectiveness studies are lacking for EDs, making accurate and reliable assessments of care models difficult. There is a need to further understand, refine and account for measures of ED complexity that define a workload upon which resources and appropriate staffing determinations can be made into the future. There is also a need for continued monitoring and comprehensive evaluation of newly implemented workforce modelling tools. This research acknowledges those gaps and aims to:

• Undertake a comprehensive and integrated whole-of-department workforce profiling exercise relative to resources in the context of ABF;

• Inform workforce requirements based on traditional quantitative markers (e.g. volume and acuity) combined with qualitative elements of ED models of care;

• Develop a comprehensive and validated workforce calculation tool that can be used to better inform, or at least guide, workforce requirements in a more transparent manner.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant techniques for developing emulators have been priors in the form of Gaussian stochastic processes (GASP) that were conditioned with a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, due to the consideration of our knowledge about dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
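Under strong simplifications, the conditioning step can be sketched as follows: a scalar linear state-space prior for the model output is conditioned on a sparse design set of simulator outputs with a Kalman filter followed by a Rauch-Tung-Striebel smoothing pass. The GP innovation terms of the proposed emulator are replaced here by fixed process noise, and all numbers are illustrative assumptions.

import numpy as np

def kalman_rts(y, a=0.95, q=0.05, r=0.01, x0=0.0, p0=1.0):
    """Smoothed states for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t; NaNs mark missing data."""
    n = len(y)
    xf, pf, xp, pp = np.zeros(n), np.zeros(n), np.zeros(n), np.zeros(n)
    x, p = x0, p0
    for t in range(n):                           # forward filtering pass
        xp[t], pp[t] = a * x, a * a * p + q      # one-step prediction
        if np.isnan(y[t]):                       # no design point at this step
            x, p = xp[t], pp[t]
        else:
            k = pp[t] / (pp[t] + r)              # Kalman gain
            x, p = xp[t] + k * (y[t] - xp[t]), (1.0 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy()
    for t in range(n - 2, -1, -1):               # backward RTS smoothing pass
        g = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

y = np.full(50, np.nan)
y[::10] = np.sin(0.5 * np.arange(5))             # sparse "design" outputs of the simulator
print(np.round(kalman_rts(y), 2))                # emulated trajectory between design points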
Abstract:
This paper describes an interactive installation work set in a large dome space. The installation is an audio and physical re-rendition of an interactive writing work. In the original work, the user interacted via keyboard and screen while online. This rendition of the work retains the online interaction, but also places the interaction within a physical space, where the main 'conversation' takes place by the participant-audience speaking through microphones and listening through headphones. The work now also includes voice and SMS input, using speech-to-text and text-to-speech conversion technologies, and audio and displayed text for output. These additions allow the participant-audience to co-author the work while they participate in audible conversation with keyword-triggering characters (bots). Communication in the space can be person-to-computer via microphone, keyboard, and phone; person-to-person via machine and within the physical space; computer-to-computer; and computer-to-person via audio and projected text.
Abstract:
A key shift in thinking about the effective learning and teaching of listening input has been seen and organised in education locally and globally. This study probed whether metacognitive instruction delivered through a pedagogical cycle shifts high-intermediate students' English language learning and an English as a second language (ESL) teacher's teaching focus on listening input. Twenty male Iranian students aged 18 to 24 received a guided methodology including metacognitive strategies (planning, monitoring, and evaluation) for a period of three months. The study used these strategies and probed the importance of metacognitive instruction by interviewing both the teacher and the students. The results show that metacognitive instruction helped both the ESL teacher and the students shift their thinking about teaching and learning listening input. This key shift in thinking has implications globally and locally for classroom practices involving listening input.
Abstract:
Motion control systems have a significant impact on the performance of ships and marine structures, allowing them to perform tasks in severe sea states and over long periods of time. Ships are designed to operate with adequate reliability and economy, and in order to achieve this, it is essential to control the motion. For each type of ship and operation performed (transit, landing a helicopter, fishing, deploying and recovering loads, etc.), there are not only desired motion settings, but also limits on the acceptable (undesired) motion induced by the environment. The task of a ship motion control system is therefore to act on the ship so it follows the desired motion as closely as possible. This book provides an introduction to the field of ship motion control by studying the control system designs for course-keeping autopilots with rudder roll stabilisation and integrated rudder-fin roll stabilisation. These particular designs provide a good overview of the difficulties encountered by designers of ship motion control systems and, therefore, serve well as an example-driven introduction to the field. The idea of combining the control design of autopilots with that of fin roll stabilisers, and the idea of using rudder-induced roll motion as the sole source of roll stabilisation, seem to have emerged in the late 1960s. Since that time, these control designs have been the subject of continuous and ongoing research. This ongoing interest is a consequence of the significant bearing that the control strategy has on performance and of the issues associated with control system design. The challenges of these designs lie in devising a control strategy to address the following issues: underactuation, disturbance rejection with a non-minimum-phase system, input and output constraints, model uncertainty, and large unmeasured stochastic disturbances. To date, the majority of the work reported in the literature has focused strongly on some of the design issues, whereas the remaining issues have been addressed using ad hoc approaches. This has provided an additional motivation for revisiting these control designs and looking at the benefits of applying a contemporary design framework, which can potentially address the majority of the design issues.