800 results for Results Based Management
Abstract:
A novel H-bridge multilevel PWM converter topology based on a series connection of a high voltage (HV) diode-clamped inverter and a low voltage (LV) conventional inverter is proposed. A DC link voltage arrangement for the new hybrid and asymmetric solution is presented to obtain the maximum number of output voltage levels while preserving the adjacent switching vectors between voltage levels. Hence, a fifteen-level hybrid converter can be attained with a minimum number of power components. A comparative study has been carried out to demonstrate the high performance of the proposed configuration in achieving a very low THD of voltage and current, which makes elimination of the output filter possible. Building on the proposed configuration, a new cascade inverter is verified by cascading an asymmetrical diode-clamped inverter, in which nineteen levels can be synthesised in the output voltage with the same number of components. To balance the DC link capacitor voltages for maximum output voltage resolution, as well as to synthesise the asymmetrical DC link combination, a new Multi-output Boost (MOB) converter is utilised at the DC link of a seven-level H-bridge diode-clamped inverter. Simulation and hardware results based on different modulations are presented to confirm the validity of the proposed approach in achieving a high quality output voltage.
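The fifteen-level figure follows from simple level counting: with a suitably asymmetric DC-link ratio, every combination of the two stages' output levels yields a distinct output voltage. A minimal sketch, assuming an illustrative 3:1 voltage ratio between a five-level HV stage and a three-level LV stage (not necessarily the paper's exact design):

```python
from itertools import product

def count_output_levels(hv_levels, lv_levels):
    """Count distinct output voltages of two series-connected inverter stages."""
    return len({a + b for a, b in product(hv_levels, lv_levels)})

# Assumed asymmetric DC-link arrangement: a five-level HV diode-clamped stage
# with steps of 3*Vdc and a three-level LV conventional stage with steps of Vdc.
hv = [3 * k for k in range(-2, 3)]   # {-6, -3, 0, 3, 6} x Vdc
lv = [-1, 0, 1]                      # {-1, 0, 1} x Vdc

print(count_output_levels(hv, lv))   # 15 distinct levels
```

With this ratio the sums 3a + b cover every integer from -7 to +7 times Vdc, which is what makes the fifteen levels adjacent (unit-step) rather than gapped.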
Abstract:
Shared leadership has been identified as a key governance base for the future of government and Catholic schools in Queensland, the state’s two largest providers of school education. Shared leadership values the contributions that many individuals can make through collaboration and teamwork. It is claimed to improve organisational performance and reduce the increasing pressures faced by principals. However, despite these positive features, shared leadership is generally not well understood, not well accepted and not valued by those who practise or study leadership. A collective case study method was chosen, incorporating a series of semi-structured interviews with principals and the use of official school documents. The study explored the current understanding and practice of shared leadership in four Queensland schools and investigated its potential for wider use.
Abstract:
Red light cameras (RLCs) have been used in a number of US cities to yield a demonstrable reduction in red light violations; however, evaluating their impact on safety (crashes) has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, “spillover” effects caused by drivers reacting to non-RLC-equipped intersections and approaches can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and the resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities were examined in detail and conclusions drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of results based on analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are reduced in general, while rear-end crashes tend to increase as a result of RLCs.
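The abstract does not name its four evaluation methodologies, but the empirical Bayes before-after method is one standard way to correct for the regression-to-the-mean bias it describes: the site's observed crash count is shrunk toward a safety performance function (SPF) prediction. A hedged sketch with hypothetical counts and parameters:

```python
def empirical_bayes_estimate(observed, predicted_per_year, overdispersion, years):
    """Empirical Bayes estimate of expected crashes at a site, blending the
    observed count with an SPF prediction to correct for regression to the mean.

    All arguments here are hypothetical; real SPFs and overdispersion
    parameters come from fitted negative binomial crash models."""
    mu = predicted_per_year * years
    # Weight on the SPF prediction: larger when the SPF is reliable
    # (w = 1 / (1 + mu / k) for a negative binomial SPF with dispersion k).
    w = 1.0 / (1.0 + mu / overdispersion)
    return w * mu + (1.0 - w) * observed

# A site observed 24 crashes in 3 years but the SPF predicts only 5 per year:
# the EB estimate pulls the inflated count back toward the prediction.
print(empirical_bayes_estimate(observed=24, predicted_per_year=5.0,
                               overdispersion=10.0, years=3))
```

The shrunken estimate, rather than the raw before-period count, is then compared with after-period crashes at the RLC sites.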
Abstract:
This paper serves as a first study on the implementation of control strategies developed using a kinematic reduction onto test-bed autonomous underwater vehicles (AUVs). The equations of motion are presented in the framework of differential geometry, including external dissipative forces, as a forced affine connection control system. We show that the hydrodynamic drag forces can be included in the affine connection, resulting in an affine connection control system. The definitions of kinematic reduction and decoupling vector field are thus extended from the ideal fluid scenario. Control strategies are computed using this new extension and are reformulated for implementation onto a test-bed AUV. We compare these geometrically computed controls to time- and energy-optimal controls for the same trajectory, which are computed using a previously developed algorithm. Through this comparison we are able to validate our theoretical results based on the experiments conducted using the time- and energy-efficient strategies.
Abstract:
Aim: To determine whether telephone support using an evidence-based protocol for chronic heart failure (CHF) management will improve patient outcomes and reduce hospital readmission rates in patients without access to hospital-based management programs. Methods: The rationale and protocol for a cluster-design randomised controlled trial (RCT) of a semi-automated telephone intervention for the management of CHF, the Chronic Heart-failure Assistance by Telephone (CHAT) Study, is described. Care is coordinated by trained cardiac nurses located in Heartline, the national call center of the National Heart Foundation of Australia, in partnership with patients’ general practitioners (GPs). Conclusions: The CHAT Study model represents a potentially cost-effective and accessible model for the Australian health system in caring for CHF patients in rural and remote areas. The system of care could also be readily adapted for a range of chronic diseases and health systems. Key words: chronic disease management; chronic heart failure; integrated health care systems; nursing care; rural health services; telemedicine; telenursing.
Abstract:
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined, which are predominantly coarse or blob-like. The set of features that characterise particulate textures is different from those that characterise classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to the more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis.
Results from the application of the developed algorithm to road surface macro-texture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination (R²) exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
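For reference, the coefficient of determination reported here is the squared Pearson correlation between the two measurement series. A minimal sketch with invented readings (not the study's data):

```python
def r_squared(x, y):
    """Coefficient of determination (R^2) of a simple linear fit of y on x,
    computed as the squared Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical imaging coarseness estimates vs. laser-profilometer SMTD values.
imaging = [0.42, 0.55, 0.61, 0.70, 0.88, 0.95]
smtd    = [0.40, 0.57, 0.60, 0.72, 0.85, 0.97]
print(r_squared(imaging, smtd) > 0.9)   # True: strongly correlated readings
```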
Abstract:
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has recently conducted a technology demonstration of a novel fixed wireless broadband access system in rural Australia. The system is based on multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MU-MIMO-OFDM). It demonstrated an uplink of six simultaneous users at distances ranging from 10 m to 8.5 km from a central tower, achieving 20 bit/s/Hz spectral efficiency. This paper reports on the analysis of channel capacity and bit error probability simulations based on the measured MU-MIMO-OFDM channels obtained during the demonstration, and their comparison with results based on channels simulated by a novel geometric-optics-based channel model suitable for MU-MIMO-OFDM in rural areas. Despite its simplicity, the model was found to predict channel capacity and bit error probability accurately for a typical MU-MIMO-OFDM deployment scenario.
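As a rough sanity check on the 20 bit/s/Hz figure, aggregate uplink spectral efficiency can be sketched under the simplifying assumption that each user's stream sees an independent AWGN channel after MU-MIMO processing (the per-user SNR below is an assumption, not a measured figure):

```python
import math

def sum_spectral_efficiency(snrs_db):
    """Aggregate spectral efficiency (bit/s/Hz) as the sum of per-user Shannon
    efficiencies, assuming independent AWGN streams after MIMO processing."""
    return sum(math.log2(1.0 + 10.0 ** (snr_db / 10.0)) for snr_db in snrs_db)

# Six simultaneous users at an assumed ~9.5 dB post-processing SNR each
# together land close to the demonstrated 20 bit/s/Hz.
print(sum_spectral_efficiency([9.5] * 6))
```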
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment, represented by parameters such as the probability of detection and penalties in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model-building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report.
We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature, and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion impacts on the consumption smoothing ability of the agent by creating two states of nature in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model-building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
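The ‘evade or not’ comparison can be illustrated with a toy two-state expected utility calculation: a risk-averse agent weighs certain consumption under honesty against a lottery over ‘caught’ and ‘not caught’ states. All functional forms and parameter values below are illustrative, not the thesis's calibration:

```python
import math

def utility(c):
    # Log utility: a risk-averse agent; consumption must be positive.
    return math.log(c)

def eu_evade(income, tax_rate, evaded, fine_rate, p_detect):
    """Expected utility when evading: two states of nature ('caught' and
    'not caught'), so evasion undermines consumption smoothing."""
    c_free   = income - tax_rate * (income - evaded)            # undetected
    c_caught = income * (1 - tax_rate) - fine_rate * tax_rate * evaded
    return (1 - p_detect) * utility(c_free) + p_detect * utility(c_caught)

def u_honest(income, tax_rate):
    """Utility under certainty when the agent chooses not to evade."""
    return utility(income * (1 - tax_rate))

# Illustrative parameters: fine equal to twice the evaded tax.
y, t, e, f = 100.0, 0.3, 50.0, 2.0
print(u_honest(y, t) > eu_evade(y, t, e, f, p_detect=0.3))   # True: honesty wins
print(u_honest(y, t) > eu_evade(y, t, e, f, p_detect=0.1))   # False: evasion wins
```

The flip between the two detection probabilities is the mechanism the thesis exploits: for plausible parameters, many agents rationally choose the ‘not-evade’ corner rather than an interior level of under-reporting.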
Abstract:
Knowledge of cable parameters has been well established, but a better knowledge of the environment in which the cables are buried lags behind. Research at the Queensland University of Technology has been aimed at obtaining and analysing actual daily field values of thermal resistivity and diffusivity of the soil around power cables. On-line monitoring systems have been developed and installed with a data logger system and buried spheres that use an improved technique to measure thermal resistivity and diffusivity over a short period. Results based on long-term continuous field data are given. A probabilistic approach is developed to establish the correlation between the measured field thermal resistivity values and rainfall data from weather bureau records. This data from field studies can reduce the risk in cable rating decisions and provide a basis for reliable prediction of “hot spots” in an existing cable circuit.
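To see why measured soil thermal resistivity matters for rating decisions, a simplified steady-state rating in the spirit of IEC 60287 can be sketched: the permissible current is the one whose conductor losses, driven through the total thermal resistance, produce the allowed temperature rise. Dielectric and sheath losses are ignored and all numbers are illustrative:

```python
import math

def ampacity(delta_theta, r_ac, t_internal, soil_resistivity, geometry_factor):
    """Simplified steady-state cable rating (amps): solve
    I^2 * R_ac * (T_internal + T_soil) = delta_theta for I.

    soil_resistivity is in K.m/W; geometry_factor (assumed) folds in burial
    depth and cable diameter so that T_soil = resistivity * geometry_factor."""
    t_soil = soil_resistivity * geometry_factor   # external thermal resistance
    return math.sqrt(delta_theta / (r_ac * (t_internal + t_soil)))

# Same cable, dry season (2.5 K.m/W) vs. after rainfall (0.8 K.m/W):
dry   = ampacity(60.0, 6e-5, 0.5, 2.5, 0.6)
moist = ampacity(60.0, 6e-5, 0.5, 0.8, 0.6)
print(round(dry), round(moist))   # moist soil permits a noticeably higher rating
```

This is exactly the link the field measurements inform: knowing the actual (often lower) resistivity avoids over-conservative ratings based on worst-case dry-soil assumptions.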
Abstract:
The concept of older adults contributing to society in a meaningful way has been termed ‘active ageing’. Active ageing reflects changes in prevailing theories of social and psychological aspects of ageing, with a focus on individuals' strengths as opposed to their deficits or pathology. In order to explore predictors of active ageing, the Australian Active Ageing (Triple A) project group undertook a national postal survey of participants over the age of 50 years, recruited randomly through their 2004 membership of a large Australia-wide seniors' organisation. The survey comprised 178 items covering paid and voluntary work, learning, social, spiritual, emotional, health and home, life events and demographic items. A 45% response rate (2655 returned surveys) reflected an expected balance of gender, age and geographic representation of participants. The data were analysed using data mining techniques to derive generalizations from individual situations. Data mining identifies the valid, novel, potentially useful and understandable patterns and trends in data. The results based on the clustering mining technique indicate that physical and emotional health, combined with the desire to learn, were the most significant factors when considering active ageing. The findings suggest that remaining active in later life is not only directly related to the maintenance of emotional and physical health, but may be significantly intertwined with the opportunity to engage in ongoing learning activities that are relevant to the individual. The findings of this study suggest that practitioners and policy makers need to incorporate older people's learning needs within service and policy framework developments.
Abstract:
Retrieving information from Twitter is always challenging due to its large volume, inconsistent writing and noise. Most existing information retrieval (IR) and text mining methods focus on term-based approaches, but suffer from problems of term variation such as polysemy and synonymy. This problem deteriorates when such methods are applied to Twitter due to its length limit. Over the years, people have held the hypothesis that pattern-based methods should perform better than term-based methods as they provide more context, but limited studies have been conducted to support this hypothesis, especially on Twitter. This paper presents an innovative framework to address the issue of performing IR in microblogs. The proposed framework discovers patterns in tweets as higher-level features and assigns weights to low-level features (i.e. terms) based on their distributions in the higher-level features. We present experimental results based on the TREC11 microblog dataset and show that our proposed approach significantly outperforms the term-based methods Okapi BM25 and TF-IDF, as well as pattern-based methods, using precision, recall and F-measure.
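The precision, recall and F-measure used in the comparison are the standard set-based IR evaluation measures, e.g.:

```python
def precision_recall_f1(retrieved, relevant):
    """Standard set-based IR evaluation measures over result sets."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical tweet ids: 4 retrieved, 3 judged relevant, 2 hits.
p, r, f = precision_recall_f1({101, 102, 103, 104}, {102, 103, 200})
print(round(p, 3), round(r, 3), round(f, 3))   # 0.5 0.667 0.571
```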
Abstract:
This is the first research focusing on Gold Coast school libraries and teacher-librarians. It presents a detailed picture of library provision and staffing at a representative group of 27 government and non-government schools on the Gold Coast. It shows links between employment of a teacher-librarian and higher NAPLAN reading and writing scores. And it presents the principals’ generally positive views about teacher-librarians’ contribution to reading and literacy at their schools. The findings respond in part to the recent government inquiry’s call (House of Representatives, 2011) for research about the current staffing of school libraries in Australia, and the influence of school libraries and teacher-librarians on students’ literacy and learning outcomes. While the study has focused on a relatively small group of school libraries, it has produced a range of significant outcomes: • An extensive review of international and Australian research showing impacts of school libraries and teacher-librarians on students’ literacy and learning outcomes • Findings consistent with international research showing: - An inverse relationship between the student-to-EFT-library-staff ratio and school NAPLAN scores for reading and writing - Schools that employ a teacher-librarian tend to achieve school NAPLAN scores for respective year levels that are higher than the national mean It is anticipated that the study’s findings will be of interest to education authorities, school leadership teams, teacher-librarians, teachers and researchers.
The findings provide evidence to: • inform policy development and strategic planning for school libraries that respond to the literacy development needs of 21st century learners • inform school-based management of school libraries • inform curriculum development and teacher-librarian practice • support further collaborative research on a State or national level • enhance conceptual understandings about relationship(s) between school libraries, teacher-librarians and literacy/information literacy development • support advocacy about school libraries, teacher-librarians and their contribution to literacy development and student learning in Australian schools SLAQ President Toni Leigh comments: “It is heartening to see findings which validate the critical role teacher-librarians play in student literacy development and the positive correlation of higher NAPLAN scores and schools with a qualified teacher-librarian. Also encouraging is the high percentage of school principals who recognise the necessity of a well-resourced school library and the positive influence of these libraries on student literacy”. This research arises from a research partnership between the School Library Association of Queensland (SLAQ) and the Children and Youth Research Centre, QUT. Lead researcher: Dr Hilary Hughes, Children and Youth Research Centre, QUT Research assistants: Dr Hossein Bozorgian, Dr Cherie Allan, Dr Michelle Dicinoski, QUT SLAQ Research Reference Group: Toni Leigh, Marj Osborne, Sally Fraser, Chris Kahl and Helen Reynolds Reference: House of Representatives. (2011). School libraries and teacher librarians in 21st century Australia. Canberra: Commonwealth of Australia. http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=ee/schoollibraries/report.htm
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines for coping with the uncertainty and trust issues of our modern world. Thus, managers look to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the “future”. Hence, they turn to a plethora of “prescriptive panaceas” and “management fads” that promise simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole. They miss the interactions and interdependencies with other parts, leading to “suboptimization”. Further classical cause-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting traditional management approaches and shed some light on the problem of management discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as a support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Amongst the plethora of standards, organizational maturity and maturity models hold a specific place, due to a general belief in organizational performance as a dependent variable of continuous (business) process improvement, grounded on a kind of evolutionary metaphor. Our intention is neither to offer a new “evidence-based management fad” for practitioners, nor to suggest a research gap to scholars.
Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning “our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude” (Long, 2002, p. 44), generating what Bernstein has named “Cartesian Anxiety” (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have both information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulation. The main contribution of this conversation is the suggestion to move from a neo-classical “theory of the game”, which aims at making the complex world simpler in playing the game, to a “theory of the rules of the game”, which aims at influencing and challenging the rules of the game constitutive of maturity models – the conventions and governing systems – making individual calculation compatible with the social context, and making possible the coordination of relationships and cooperation between agents with divergent or potentially divergent interests and values. A second contribution is the reconceptualization of maturity as structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
Crashes on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of the crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and open the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with traffic flow data from the hour prior to the crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. The k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their distance. Four major trends were found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned to the monitored traffic flow conditions with a sliding window of 60 minutes, to increase the accuracy of the results and minimise false alarms.
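The k-means step can be sketched for equal-length pre-crash speed profiles as follows. The speed series below are invented for illustration (the study used speed/occupancy series from expressway detectors), and a deterministic farthest-point initialisation replaces the usual random seeding for reproducibility:

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def kmeans(series, k, iters=20):
    """Minimal k-means for equal-length time series (Euclidean distance),
    with deterministic farthest-point initialisation."""
    centroids = [series[0]]
    while len(centroids) < k:
        centroids.append(max(series,
                             key=lambda s: min(euclid(s, c) for c in centroids)))
    labels = [0] * len(series)
    for _ in range(iters):
        # Assign each series to its nearest centroid, then recompute centroids.
        labels = [min(range(k), key=lambda j: euclid(s, centroids[j]))
                  for s in series]
        for j in range(k):
            members = [s for s, l in zip(series, labels) if l == j]
            if members:
                centroids[j] = [sum(v) / len(members) for v in zip(*members)]
    return labels

# Hypothetical one-hour pre-crash speed profiles (km/h, 15-min averages):
# two free-flow trends and two congested trends.
speeds = [[85, 84, 86, 85], [30, 28, 25, 22], [83, 85, 84, 86], [32, 29, 26, 23]]
print(kmeans(speeds, k=2))   # free-flow and congested profiles separate cleanly
```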
Abstract:
The Macroscopic Fundamental Diagram (MFD) has been proved to exist in large urban road and freeway networks, both by theoretical methods and by real data from cities. However, hysteresis and scatter have also been found on both motorway networks and urban roads. This paper investigates how incident variables affect the scatter and shape of the MFD, using both simulated data and real data collected from the Pacific Motorway (M3) in Brisbane, Australia. Three key components of an incident are investigated with the simulated data: incident location, incident duration and traffic demand. Results based on the simulated data indicate that MFD shape is a property not only of the network itself but also of the incident characteristics. MFDs for three types of real incidents (crash, hazard and breakdown) are explored separately. The results based on the empirical data are consistent with the simulated results. The hysteresis phenomenon occurs both upstream and downstream of the incident location, but with opposite hysteresis loops. The gradient of the MFD upstream of the incident site is greater than that downstream when traffic demand is off-peak.
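For context, each point of an MFD is typically a network-level aggregate of link detector measurements; a common choice is the length-weighted average flow and density over all links. A sketch with invented numbers:

```python
def network_point(links):
    """One MFD observation: length-weighted average flow (veh/h) and density
    (veh/km) over all detectorised links. Each link is (flow, density, length_km)."""
    total_len = sum(length for _, _, length in links)
    q = sum(flow * length for flow, _, length in links) / total_len
    k = sum(density * length for _, density, length in links) / total_len
    return q, k

# Hypothetical snapshot: one free-flowing 1 km link and one congested 2 km link.
links = [(1800, 20, 1.0), (900, 60, 2.0)]
print(network_point(links))   # (1200.0, 46.66...)
```

Hysteresis shows up when a sequence of such points traces a different path during congestion onset than during recovery; splitting links into upstream and downstream subsets of an incident, as above, produces the two opposite loops the paper describes.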