Abstract:
The authors are currently engaged in two projects to improve human-computer interaction (HCI) designs that can help conserve resources. The projects explore motivation and persuasion strategies relevant to ubiquitous computing systems that bring real-time consumption data into the homes and hands of residents in Brisbane, Australia. The first project seeks to increase understanding among university staff of the tangible and negative effects that excessive printing has on the workplace and local environment. The second project seeks to shift attitudes toward domestic energy conservation through software and hardware that monitor real-time, in situ electricity consumption in homes across Queensland. The insights drawn from these projects will help develop resource consumption user archetypes, providing a framework that links people to differing interface design requirements.
Abstract:
Nonlinear filter generators are common components of keystream generators for stream ciphers and, more recently, of authentication mechanisms. They consist of a Linear Feedback Shift Register (LFSR) and a nonlinear Boolean function that masks the linearity of the LFSR output. Properties of the output of a nonlinear filter are not well studied. Anderson noted that the m-tuple output of a nonlinear filter with consecutive taps to the filter function is unevenly distributed. Current designs use taps that are not consecutive. We examine m-tuple outputs from nonlinear filter generators constructed using various LFSRs and Boolean functions, for both consecutive and uneven (full positive difference sets where possible) tap positions. The investigation reveals that in both cases the m-tuple output is not uniform. However, consecutive tap positions result in a more biased distribution than uneven tap positions, with some m-tuples not occurring at all. These biased distributions indicate a potential flaw that could be exploited for cryptanalysis.
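To make the construction concrete, the following minimal sketch builds a nonlinear filter generator and tallies its overlapping m-tuple output; the feedback polynomial, filter taps and Boolean function are illustrative choices (a standard maximal-length 16-bit LFSR with unevenly spaced filter taps), not the parameters examined in the paper.

```python
# Minimal nonlinear filter generator sketch (illustrative parameters only).
from collections import Counter

LFSR_LEN = 16
FEEDBACK = [16, 14, 13, 11]     # taps of a maximal-length 16-bit LFSR (1-indexed)
FILTER_TAPS = [1, 4, 8, 13]     # unevenly spaced inputs to the Boolean filter

def keystream(state, n):
    """Generate n output bits: LFSR state masked by a nonlinear filter."""
    out = []
    for _ in range(n):
        x = [(state >> (t - 1)) & 1 for t in FILTER_TAPS]
        out.append(x[0] ^ (x[1] & x[2]) ^ x[3])          # nonlinear function f
        fb = 0
        for t in FEEDBACK:                               # linear feedback
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << LFSR_LEN) - 1)
    return out

def mtuple_distribution(bits, m):
    """Count overlapping m-bit tuples of the output stream."""
    return Counter(tuple(bits[i:i + m]) for i in range(len(bits) - m + 1))

bits = keystream(state=0xACE1, n=1 << 16)
for tup, count in sorted(mtuple_distribution(bits, m=4).items()):
    print(tup, count)           # a uniform source would give near-equal counts
```

Re-running the count with consecutive FILTER_TAPS (e.g. [1, 2, 3, 4]) and comparing the histograms reproduces, in miniature, the kind of bias measurement the paper reports.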
Abstract:
This paper anatomises emerging developments in online community engagement in a major global industry: real estate. Economists argue that we are entering a ‘social network economy’ in which ‘complex social networks’ govern consumer choice and product value. In light of this, organisations are shifting from the conventional ‘value chain’ model, in which exchanges between firms and customers flow one way only, from the firm to the consumer, to the ‘value ecology’ model, in which consumers and their networks become co-creators of the value of the product. This paper studies the way in which the global real estate industry is responding to this environment. It identifies three key areas in which online real estate ‘value ecology’ work is occurring: real estate social networks, games, and locative media / augmented reality applications. Uptake of real estate applications is, of course, user-driven: the paper not only highlights emerging innovations; it also identifies which of these innovations are actually being taken up by users, and the content contributed as a result. The paper thus provides a case study of one major industry’s shift into a web 2.0 communication model, focusing on emerging trends and issues.
Abstract:
The emergence of mobile and ubiquitous computing technology has created what is often referred to as the hybrid space: a virtual layer of digital information and interaction opportunities that sits on top of, and augments, the physical environment. Embodied media materialise digital information as observable and sometimes interactive parts of the physical environment. The aim of this work is to explore ways to enhance people’s situated real-world experience, and to find out what role embodied media can play, and what impact they can have, in achieving this goal. The Edge, an initiative of the State Library of Queensland in Brisbane, Australia, and the case study of this thesis, is envisioned as a physical place for people to meet, explore, experience, learn and teach each other creative practices in various areas related to digital technology and the arts. Guided by an Action Research approach, this work applies Lefebvre’s triad of space (1991) to investigate the Edge as a social space from a conceived, perceived and lived point of view. Based on its creators’ vision and goals on the conceived level, different embodied media are iteratively designed, implemented and evaluated towards shaping and amplifying the Edge’s visitor experience on the perceived and lived levels.
Abstract:
A number of advanced driver assistance systems (ADAS) are currently being released on the market, providing drivers with safety functions such as collision avoidance, adaptive cruise control or enhanced night vision. These systems, however, are inherently limited by their sensory range: they cannot gather information from outside this range, also called their “perceptive horizon”. Cooperative systems are a developing research avenue that aims to provide extended safety and comfort functionalities by introducing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications among road actors. This paper presents the challenges of cooperative systems, their advantages and contributions to road safety, and discusses some limitations related to market penetration, sensor accuracy and communications scalability. It explains the issues involved in implementing extended perception, a central contribution of cooperative systems. The initial steps of an evaluation of data fusion architectures for extended perception are presented.
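As a hedged illustration of extended perception (a toy sketch, not one of the data fusion architectures evaluated in the paper): a vehicle merges its own sensor detections with object positions received over V2V, de-duplicating objects seen by both, so that objects beyond its perceptive horizon become available to its safety functions.

```python
# Toy extended-perception fusion: local detections + V2V-received detections.
from dataclasses import dataclass
import math

@dataclass
class Detection:
    x: float      # metres, in a shared road coordinate frame (assumed)
    y: float
    source: str   # "local" or "v2v"

def fuse(local, received, gate=2.0):
    """Keep all local detections; add V2V detections that match nothing
    local within `gate` metres (a simple nearest-neighbour gate)."""
    fused = list(local)
    for r in received:
        if all(math.hypot(r.x - l.x, r.y - l.y) > gate for l in local):
            fused.append(r)
    return fused

local = [Detection(12.0, 3.5, "local")]
v2v = [Detection(12.3, 3.4, "v2v"),        # duplicate of the local object
       Detection(140.0, -2.0, "v2v")]      # beyond the local sensory range
print(fuse(local, v2v))                    # the 140 m object extends the horizon
```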
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to detect the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98, experimental, and r² = 0.94, theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same velocity of flow. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This had not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180 to 250 s) is consistently larger than that determined for control subjects (τ = 50 to 130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
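As a hedged illustration of how such a time decay constant can be extracted: fit an exponential of the form Z(t) = Z_inf + dZ·exp(-t/τ) to the deceleration-phase impedance. The model form and the synthetic data below are assumptions for illustration, not the thesis's actual estimation procedure.

```python
# Fit an exponential decay to a (synthetic) deceleration-phase impedance signal.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, z_inf, dz, tau):
    """Z(t) = Z_inf + dZ * exp(-t / tau) during the deceleration phase."""
    return z_inf + dz * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 400)                               # time, s
z = decay(t, 50.0, 5.0, 30.0) + rng.normal(0.0, 0.05, t.size)  # synthetic data

popt, _ = curve_fit(decay, t, z, p0=(z.min(), np.ptp(z), 10.0))
print(f"fitted tau = {popt[2]:.1f} s")   # ~30 s, inside the reported 10-50 s range
```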
Abstract:
The main objective of this paper is to detail the development of a feasible hardware design based on Evolutionary Algorithms (EAs) to determine flight path planning for Unmanned Aerial Vehicles (UAVs) navigating terrain with obstacle boundaries. The design architecture includes on-chip memories for the Light Detection And Ranging (LiDAR) terrain data and the EA population, as well as the EA search and evaluation algorithms used in the optimising stage of path planning. A synthesisable Very High Speed Integrated Circuit Hardware Description Language (VHDL) implementation of the design was developed for realisation on a Field Programmable Gate Array (FPGA) platform. Simulation results show significant speedup compared with an equivalent software implementation written in C++, suggesting that the present approach is well suited for UAV real-time path planning applications.
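A software sketch of the EA loop that such a design maps into hardware may help; the grid terrain, waypoint genome, fitness function and operators below are illustrative assumptions, not the paper's VHDL architecture.

```python
# Toy evolutionary path planner: evolve waypoints from START to GOAL that
# avoid an obstacle wall (fitness = path length + penalty for hit waypoints).
import random

GRID = 16
OBSTACLES = {(5, y) for y in range(3, 13)}   # a wall with gaps at top/bottom
START, GOAL = (0, 8), (15, 8)

def fitness(path):
    pts = [START] + path + [GOAL]
    length = sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(pts, pts[1:]))
    return length + 100 * sum(p in OBSTACLES for p in path)

def random_path(n=4):
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(n)]

def mutate(path):
    p = list(path)
    p[random.randrange(len(p))] = (random.randrange(GRID), random.randrange(GRID))
    return p

pop = [random_path() for _ in range(50)]
for _ in range(200):                          # generational EA loop
    pop.sort(key=fitness)                     # evaluate and rank
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
print(pop[0], fitness(pop[0]))                # best waypoint list found
```

The paper's reported speedup comes from implementing this kind of loop directly in hardware; the sketch above is only the algorithmic skeleton.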
Abstract:
The Queensland University of Technology badges itself as “a university for the real world”. For the last decade the Law Faculty has aimed to provide its students with a ‘real world’ degree, that is, a practical law degree. This has seen skills such as research, advocacy and negotiation incorporated into the undergraduate degree under a University Teaching & Learning grant, a project that gained international recognition and praise. In 2007–2008 the Law Faculty undertook another curriculum review of its undergraduate law degree. As a result of the two-year review, QUT’s undergraduate law degree has fewer core units, a focus on first year student transition, scaffolding of law graduate capabilities throughout the degree, work-integrated learning and transition to the workplace. Implementation of the revised degree commenced in 2009. This paper focuses on the “real world” approach to the degree achieved through the first year programme, embedding and scaffolding law graduate capabilities through authentic and valid assessment and work-integrated learning.
Abstract:
Drink driving causes more fatal crashes than any other single factor on Australian roads, with a third of crashes having alcohol as a contributing factor. In recent years there has been a plateau in the number of drink drivers apprehended by random breath testing (RBT), and around 12% of the general population in self-report surveys admit to drinking and driving. There is limited information about the first offender group, particularly the subgroup of these offenders who admit to prior drink driving, whose offence is therefore the “first time caught”. This research focuses on the differences between those who report drink driving prior to apprehension for the offence and those who do not. Methods: 201 first-time drink driving offenders were interviewed at the time of their court appearance. Information was collected on socio-demographic variables, driving behaviour, method of apprehension, offence information, alcohol use and self-reported previous drink driving. Results: 78% of respondents reported that they had driven over the legal alcohol limit in the 6 months prior to the offence. Analyses revealed that offenders who had driven over the limit previously without being caught were more likely to be younger and to have an issue with risky drinking. When all variables were taken into account in a multivariate model using logistic regression, only risky drinking emerged as significantly related to past drink driving: high-risk drinkers were 4.8 times more likely to report having driven over the limit without being apprehended in the previous 6 months. Conclusion: The majority of first offenders are “first time apprehended” rather than “first time drink drivers”. Understanding the differences between these groups may alter the focus of educational or rehabilitation countermeasures. This research is part of a larger project aiming to target first-time apprehended offenders for tailored intervention.
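For readers unfamiliar with how such an odds ratio is obtained, a hedged sketch follows: a multivariate logistic regression on synthetic data. The dataset, covariates and coefficient below are invented for illustration; the study's data are not reproduced.

```python
# Illustrative multivariate logistic regression producing an odds ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 201                                    # matches the sample size only
risky_drinking = rng.integers(0, 2, n).astype(float)
age = rng.normal(35.0, 10.0, n)
# Synthetic outcome linked to risky drinking: exp(1.57) ~ 4.8.
logit = -0.5 + 1.57 * risky_drinking
prior_dd = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([risky_drinking, age]))
fit = sm.Logit(prior_dd, X).fit(disp=0)
print(np.exp(fit.params[1]))   # odds ratio for risky drinking (~4.8, up to noise)
```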
Abstract:
The Internet presents a constantly evolving frontier for criminology and policing, especially in relation to online predators: paedophiles operating on the Internet for safer access to children, child pornography and networking opportunities with other online predators. The goals of this qualitative study are to undertake behavioural research, identifying personality types and archetypes of online predators and comparing and contrasting them with behavioural profiles and other psychological research on offline paedophiles and sex offenders. It is also an endeavour to gather intelligence on the technological utilisation of online predators and to conduct observational research on the social structures of online predator communities. These goals were achieved through the covert monitoring and logging of public activity within four Internet Relay Chat (IRC) chatrooms themed around child sexual abuse and located on the Undernet network. Five days of monitoring were conducted on these four chatrooms, from Wednesday 1 to Sunday 5 April 2009; this raw data was collated and analysed. The analysis identified four personality types (the gentleman predator, the sadist, the businessman and the pretender) and eight archetypes consisting of the groomers, dealers, negotiators, roleplayers, networkers, chat requestors, posters and travellers. The characteristics and traits of these personality types and archetypes, which were extracted from the literature dealing with offline paedophiles and sex offenders, are detailed and contrasted against the online sexual predators identified within the chatrooms, revealing many similarities and interesting differences, particularly with the businessman and pretender personality types. These personality types and archetypes were illustrated by selecting users who displayed the appropriate characteristics and tracking them through the four chatrooms, revealing intelligence data on the use of proxy servers (especially via the Tor software) and other security strategies such as Undernet’s host masking service. Name and age changes, potentially used as a sexual grooming tactic, were also revealed through the use of Analyst’s Notebook software, and ISP information suggested that many online predators were not using any safety mechanism and were relying on the anonymity of the Internet. The activities of these online predators were analysed, especially with regard to child sexual grooming and the ‘posting’ of child pornography, revealing some of the methods by which online predators use new Internet technologies to sexually groom and abuse children (through technologies such as instant messengers, webcams and microphones) as well as to store and disseminate illegal materials on image sharing websites and peer-to-peer software such as Gigatribe. The social structures of the chatrooms were also analysed, and the community functions and characteristics of each chatroom explored. The findings of this research indicate several opportunities for further research. As a result of this research, recommendations are given on policy, prevention and response strategies with regard to online predators.
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter that yields an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their “true” values. As a result, the regularization parameter is computed adaptively as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model’s ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method can improve the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can largely benefit from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to the effective recovery of tropospheric slant path delays in order to establish a 4-D troposphere tomography.
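The regularization step can be illustrated with a generic Tikhonov (ridge) solution, x = (AᵀA + λI)⁻¹Aᵀy; the paper's adaptive, geometry-driven choice of the regularization parameter is reduced here to a fixed scalar λ, so this is a sketch of the idea rather than the published method.

```python
# Tikhonov-regularised least squares on an ill-conditioned toy model.
import numpy as np

def regularised_lsq(A, y, lam):
    """Solve min ||Ax - y||^2 + lam*||x||^2 via (A^T A + lam*I) x = A^T y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 4))
A[:, 3] = A[:, 2] + 1e-3 * rng.normal(size=40)   # near-collinear columns mimic
                                                 # the height/RZTD correlation
x_true = np.array([0.01, -0.02, 0.05, 0.001])    # last entry: RZTD-like term
y = A @ x_true + 1e-3 * rng.normal(size=40)

print(regularised_lsq(A, y, lam=0.0))   # ordinary least squares: unstable
print(regularised_lsq(A, y, lam=0.1))   # regularised: stabilised estimate
```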
Abstract:
In total, 782 Escherichia coli strains originating from various host sources have been analyzed in this study by using a highly discriminatory single-nucleotide polymorphism (SNP) approach. A set of eight SNPs, with a discrimination value (Simpson's index of diversity [D]) of 0.96, was determined using the Minimum SNPs software, based on sequences of housekeeping genes from the E. coli multilocus sequence typing (MLST) database. Allele-specific real-time PCR was used to screen 114 E. coli isolates from various fecal sources in Southeast Queensland (SEQ). The combined analysis of both the MLST database and SEQ E. coli isolates using eight high-D SNPs resolved the isolates into 74 SNP profiles. The data obtained suggest that SNP typing is a promising approach for the discrimination of host-specific groups and allows for the identification of human-specific E. coli in environmental samples. However, a more diverse E. coli collection is required to determine animal- and environment-specific E. coli SNP profiles due to the abundance of human E. coli strains (56%) in the MLST database.
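Simpson's index of diversity used for the discrimination value is a standard formula, D = 1 - Σ nⱼ(nⱼ - 1) / (N(N - 1)), where nⱼ is the size of the j-th typing group and N is the total number of isolates. The profile sizes below are hypothetical, chosen only to show the computation.

```python
def simpsons_d(counts):
    """Simpson's index of diversity: D = 1 - sum n_j(n_j-1) / (N(N-1))."""
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical partition of 114 isolates into SNP profiles (sizes only):
profile_sizes = [20, 15, 10, 8, 5] + [1] * 56
print(f"D = {simpsons_d(profile_sizes):.2f}")   # ~0.94: highly discriminatory
```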
Abstract:
Background: This economic evaluation reports the results of a detailed study of the cost of major trauma treated at Princess Alexandra Hospital (PAH), Australia. Methods: A bottom-up approach was used to collect and aggregate the direct and indirect costs generated by a sample of 30 inpatients treated for major trauma at PAH in 2004. Major trauma was defined as an admission for Multiple Significant Trauma with an Injury Severity Score >15. Direct and indirect costs were amalgamated from three sources: (1) PAH inpatient costs, (2) Medicare Australia, and (3) a survey instrument. Inpatient costs included the initial episode of inpatient care, including clinical and outpatient services, and any subsequent re-presentations for ongoing related medical treatment. Medicare Australia provided an itemized list of pharmaceutical and ambulatory goods and services. The survey instrument collected out-of-pocket expenses and the opportunity cost of employment forgone. Inpatient data obtained from a publicly funded trauma registry were used to control for any potential bias in our sample. Costs are reported in Australian dollars for 2004 and 2008. Results: The average direct and indirect costs of major trauma incurred up to 1 year post-discharge were estimated to be A$78,577 and A$24,273, respectively. The aggregate costs, for the State of Queensland, were estimated to range from A$86.1 million to A$106.4 million in 2004 and from A$135 million to A$166.4 million in 2008. Conclusion: These results demonstrate that (1) the costs of major trauma are significantly higher than previously reported estimates and (2) the cost of readmissions increased inpatient costs by 38.1%.
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the control performance of a control system may deteriorate badly, and the control system may be rendered unstable. The main challenge of NCS design is to maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency requirements. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronization protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled by means of a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. Using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve a tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
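The predictive compensation idea can be sketched at toy scale (the plant, gain, horizon and loss rate below are invented for illustration, not the thesis's design): the controller sends a short horizon of predicted control moves with every packet, and the actuator steps through the stored sequence whenever packets are delayed or lost.

```python
# Toy predictive compensation for packet loss in an NCS control loop.
import random

def plant(x, u):
    return 0.9 * x + 0.5 * u                 # simple scalar plant model

def control_sequence(x, horizon=5, k=1.2):
    """Predict the next `horizon` control moves from the current state."""
    seq = []
    for _ in range(horizon):
        u = -k * x
        seq.append(u)
        x = plant(x, u)                      # roll the model forward
    return seq

random.seed(2)
x, buffer, step = 5.0, [0.0], 0
for t in range(30):
    if random.random() > 0.3:                # packet arrives (70% of the time)
        buffer, step = control_sequence(x), 0
    else:                                    # packet lost: use stored prediction
        step = min(step + 1, len(buffer) - 1)
    x = plant(x, buffer[step])
    print(f"t={t:2d}  x={x:+.3f}")           # state still converges under losses
```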
Abstract:
The new configuration proposed in this paper for the Marx generator (MG) aims to generate high voltage for pulsed power applications with a reduced number of semiconductor components and a more efficient load supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonance phenomenon to charge each capacitor up to double the input voltage level. In each resonant half-cycle, one of the two capacitor groups is charged; eventually the charged capacitors are connected in series so that the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator that works on the resonant concept. Simulated models of this converter have been investigated in the Matlab/Simulink platform, and a prototype setup has been implemented in the laboratory. The results of both fully satisfy the expectations of proper operation of the converter.
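The voltage doubling follows from ideal series-LC behaviour: charging an initially discharged capacitor from a DC source V_in through an inductor gives v_C(t) = V_in(1 - cos(ω₀t)), which peaks at 2·V_in after half a resonant period (t = π/ω₀). A minimal numeric check, with illustrative component values rather than the prototype's:

```python
# Resonant charging of one stage capacitor to twice the input voltage.
import math

V_IN = 1000.0                 # input voltage, V (assumed)
L, C = 1e-3, 1e-6             # charging inductor and stage capacitor (assumed)
w0 = 1.0 / math.sqrt(L * C)   # resonant angular frequency, rad/s

def v_cap(t):
    """Lossless series-LC capacitor voltage when charged from a DC source."""
    return V_IN * (1.0 - math.cos(w0 * t))

t_half = math.pi / w0         # end of the resonant half-cycle
print(v_cap(t_half))          # -> 2000.0 V: each capacitor charges to 2*V_in
```

With n such capacitors then switched into series, the output approaches 2n·V_in, so fewer stages are needed for a given output voltage than when each capacitor charges only to V_in.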