266 results for ecologies-of-knowledge


Relevance:

90.00%

Publisher:

Abstract:

The reliability of Critical Infrastructure is considered a fundamental expectation of modern societies. These large-scale socio-technical systems have always, due to their complex nature, been faced with threats challenging their ongoing functioning. However, increasing uncertainty, in addition to the trend of infrastructure fragmentation, has made reliable service provision not only a key organisational goal but a major continuity challenge, especially given the highly interdependent network conditions that exist both regionally and globally. The notion of resilience as an adaptive capacity supporting infrastructure reliability under conditions of uncertainty and change has emerged as a critical capacity for systems of infrastructure and the organisations responsible for their reliable management. This study explores infrastructure reliability through the lens of resilience from an organisation and system perspective, using two recognised resilience-enhancing management practices, High Reliability Theory (HRT) and Business Continuity Management (BCM), to better understand how this phenomenon manifests within a partially fragmented (corporatised) critical infrastructure industry – the Queensland Electricity Industry. The methodological approach involved a single case study design (industry) with embedded sub-units of analysis (organisations), utilising in-depth interviews and document analysis to elicit findings. Derived from detailed assessment of BCM and reliability-enhancing characteristics, findings suggest that the industry as a whole exhibits resilient functioning; however, this was found to manifest at different levels across the industry and in different combinations. Whilst there were distinct differences with respect to resilient capabilities at the organisational level, differences were less marked at a systems (industry) level, with many common understandings carried over from the pre-corporatised operating environment. These Heritage Factors were central to understanding the systems-level cohesion noted in the work. The findings of this study are intended to contribute to a body of knowledge encompassing resilience and high reliability in critical infrastructure industries. The research also has value from a practical perspective, as it suggests a range of opportunities to enhance resilient functioning under increasingly interdependent, networked conditions.

Relevance:

90.00%

Publisher:

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that, in order to optimise asset management activities, process modelling initiatives should be adopted. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined and optimised and are ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, thus fuelling the need for research exploring the possible benefits of their cross-disciplinary applications. This research therefore investigates these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks. One of the benefits of applying workflow models to AM processes is to adapt to and enable both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and cope with changes that occur to the process during enactment. So far, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified. This is the fundamental step in developing a well-founded workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision-making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All of these lead to the fourth stage, where a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM.
Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once incorporated with rich information from a specific AM decision-making process (e.g. in the case of a building construction or a power plant maintenance), is able to support the automation of such a process in a more elaborate way.
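To make the idea of a process definition separate from its enactment concrete, here is a minimal sketch of an AM decision process captured as a workflow of tasks and transitions. It is purely illustrative: the Workflow class, task names and graph structure are assumptions for this example, not artefacts of the thesis or of any particular WFMS.

```python
# Minimal sketch: an AM decision process as a workflow of tasks.
# Task names and structure are illustrative only, not from the thesis.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workflow:
    tasks: Dict[str, Callable[[dict], None]] = field(default_factory=dict)
    transitions: Dict[str, List[str]] = field(default_factory=dict)

    def add_task(self, name, action, successors=()):
        self.tasks[name] = action
        self.transitions[name] = list(successors)

    def run(self, start, context):
        # Depth-first enactment; a real WFMS would add state, roles,
        # exception handling and support for change during enactment.
        stack = [start]
        while stack:
            name = stack.pop()
            self.tasks[name](context)
            stack.extend(reversed(self.transitions[name]))

wf = Workflow()
wf.add_task("identify_need", lambda c: c.update(need="replace pump"), ["assess_options"])
wf.add_task("assess_options", lambda c: c.update(options=["repair", "replace"]), ["decide"])
wf.add_task("decide", lambda c: c.update(decision=c["options"][1]), [])
ctx = {}
wf.run("identify_need", ctx)
print(ctx["decision"])  # -> "replace"
```

The point of the sketch is only the separation of the process definition (tasks and transitions) from its enactment; a production WFMS would add persistence, role assignment and ad-hoc change support.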

Relevance:

90.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorised treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain.

The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion.
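As a rough illustration of the Δ GPS position/time method evaluated in study one, the sketch below estimates speed from successive latitude/longitude fixes. The haversine formula and the sample fixes are assumptions for this example, not data or code from the study.

```python
# Sketch: estimating running speed from successive GPS fixes as change in
# position over time (the "Δ GPS position/time" method). Illustrative only.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) fixes."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """fixes: list of (t_seconds, lat, lon). Returns speeds in m/s."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out

# Hypothetical 1 Hz fixes near Brisbane; speeds come out around 4 m/s.
fixes = [(0, -27.47000, 153.02000), (1, -27.47000, 153.02004), (2, -27.47003, 153.02008)]
print(speeds_from_fixes(fixes))
```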
The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain.

The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9,968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7,476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2,492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that, for some runners, the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills.

In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
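The abstract reports that speed was best predicted by a modified gradient factor weighting prior and current gradients, but does not give the formula. The sketch below is therefore one plausible, purely illustrative form of such a predictor; the linear blend and the coefficients are invented for the example, not taken from the thesis.

```python
# Illustrative speed predictor from gradient. The weighting of prior and
# current gradients follows the idea described in the abstract; the linear
# form and the coefficients k and w are assumptions for this sketch.
def predict_speed(level_speed, current_grad, prior_grad, k=0.05, w=0.3):
    """level_speed in m/s; gradients in percent (+ uphill, - downhill).
    w weights the gradient of the preceding section."""
    effective_grad = w * prior_grad + (1 - w) * current_grad
    return level_speed * (1 - k * effective_grad)

print(predict_speed(4.0, 5.0, 0.0))   # slower on an uphill after a level section
print(predict_speed(4.0, -5.0, 5.0))  # downhill speed moderated by the prior uphill
```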

Relevance:

90.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying the one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a "ring-down" time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such "poorly damped" modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but "negatively damped", catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
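A toy illustration of the energy-detection rationale follows. It replaces the thesis's statistical model of disturbance energy with a simple baseline mean-plus-three-sigma threshold over sliding-window energies; the simulated signal, sampling rate and threshold rule are all assumptions for this example.

```python
# Sketch of the energy-based detection idea: lightly damped modes decay
# slowly, so the signal energy in a sliding window rises when damping
# deteriorates. Threshold here is baseline mean + 3 sigma; the thesis
# instead derives it from a statistical model of the disturbance energy.
import numpy as np

def window_energy(x, win):
    """Mean signal energy in consecutive windows of `win` samples."""
    n = len(x) // win
    return np.array([np.mean(x[i * win:(i + 1) * win] ** 2) for i in range(n)])

rng = np.random.default_rng(0)
fs = 50                                    # 50 Hz sampling, 2 minutes of data
t = np.arange(0, 120, 1 / fs)
x = 0.05 * rng.standard_normal(t.size)     # normal operation: ambient noise only
late = t > 60                              # a lightly damped 0.5 Hz mode appears
x[late] += np.sin(2 * np.pi * 0.5 * t[late]) * np.exp(-0.01 * (t[late] - 60))

e = window_energy(x, win=10 * fs)          # energy per 10-second window
baseline = e[:5]                           # windows known to be pre-change
threshold = baseline.mean() + 3 * baseline.std()
print(np.flatnonzero(e > threshold))       # flags the windows after t = 60 s
```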
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an "alarm" when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
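The chi-squared whiteness test behind the KID can be sketched as follows. For simplicity, the innovation sequence is simulated directly (white noise while the model is valid, white noise plus an emerging sinusoidal mode after a change) rather than produced by an actual Kalman filter; the scaling uses the standard result that the normalised periodogram of white noise is approximately chi-squared with 2 degrees of freedom per bin.

```python
# Sketch of the KID alarm test: while the Kalman model is valid the
# innovation is white and its normalised periodogram stays below a
# chi-squared threshold; a new mode produces a strong spectral peak.
import numpy as np
from scipy.stats import chi2

def innovation_spectrum_alarm(innov, alpha=0.01):
    """Flag frequency bins whose normalised periodogram exceeds the
    chi-squared (2 dof) threshold expected for white innovations."""
    n = len(innov)
    p = np.abs(np.fft.rfft(innov - innov.mean())) ** 2 / n
    p_norm = 2 * p / innov.var()            # ~ chi2(2) per bin if white
    thresh = chi2.ppf(1 - alpha, df=2)
    return np.flatnonzero(p_norm > thresh)

rng = np.random.default_rng(1)
white = rng.standard_normal(1024)                     # model still valid
t = np.arange(1024)
changed = white + 0.5 * np.sin(2 * np.pi * 0.05 * t)  # new poorly damped mode
print(len(innovation_spectrum_alarm(white)))    # only a few false alarms
print(innovation_spectrum_alarm(changed)[:5])   # bins near 0.05 cycles/sample
```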

Relevance:

90.00%

Publisher:

Abstract:

Throughout history, developments in medicine have aimed to improve patient quality of life and reduce the trauma associated with surgical treatment. Surgical access to internal organs and bodily structures has traditionally been via large incisions. Endoscopic surgery presents a technique for surgical access via small (10 mm) incisions by utilising a scope and camera for visualisation of the operative site. Endoscopy presents enormous benefits for patients in terms of lower post-operative discomfort and reduced recovery and hospitalisation time. Since the first gall bladder extraction operation was performed in France in 1987, endoscopic surgery has been embraced by the international medical community. With the adoption of the new technique, new problems never previously encountered in open surgery were revealed. One such problem is that the removal of large tissue specimens and organs is restricted by the small incision size. Instruments have been developed to address this problem; however, none of the devices provides a totally satisfactory solution. They have a number of critical weaknesses:

- The size of the access incision has to be enlarged, thereby compromising the entire endoscopic approach to surgery.
- The physical quality of the specimen extracted is very poor and is not suitable for conducting the necessary post-operative pathological examinations.
- The safety of both the patient and the physician is jeopardised.

The problem of tissue and organ extraction at endoscopy is investigated and addressed. In addition to background information covering endoscopic surgery, this thesis describes the entire approach to the design problem and the steps taken before arriving at the final solution. This thesis contributes to the body of knowledge associated with the development of endoscopic surgical instruments. A new product capable of extracting large tissue specimens and organs in endoscopy is the final outcome of the research.

Relevance:

90.00%

Publisher:

Abstract:

Two perceptions of the marginality of home economics are widespread across educational and other contexts. One is that home economics and those who engage in its pedagogy are inevitably marginalised within patriarchal relations in education and culture. This is because home economics is characterised as women's knowledge, for the private domain of the home. The other perception is that only orthodox epistemological frameworks of inquiry should be used to interrogate this state of affairs. These perceptions have prompted leading theorists in the field to call for non-essentialist approaches to research in order to re-think the thinking that has produced this cul-de-sac positioning of home economics as a body of knowledge and a site of teacher practice. This thesis takes up the challenge of working to locate a space outside the frame of modernist research theory and methods, recognising that this shift in epistemology is necessary to unsettle the idea that home economics is inevitably marginalised. The purpose of the study is to reconfigure how we have come to think about home economics teachers and the profession of home economics as a site of cultural practice, in order to think it otherwise (Lather, 1991). This is done by exploring how the culture of home economics is being contested from within. To do so, the thesis uses a 'posthumanist' approach, which rejects the conception of the individual as a unitary and fixed entity, conceiving it instead as a subject in process, shaped by desires and language which are not necessarily consciously determined. This posthumanist project focuses attention on pedagogical body subjects as the 'unsaid' of home economics research. It works to transcend the modernist dualism of mind/body, and other binaries central to modernist work, including private/public, male/female, paid/unpaid, and valued/unvalued. In so doing, it refuses the simple margin/centre geometry so characteristic of current perceptions of home economics itself. Three studies make up this work. Studies one and two serve to document the disciplined body of home economics knowledge, the governance of which works towards normalisation of the 'proper' home economics teacher. The analysis of these accounts of home economics teachers by home economics teachers reveals that home economics teachers are 'skilled' yet they 'suffer' for their profession. Further, home economics knowledge is seen to be complicit in reinforcing the traditional roles of masculinity and femininity, thereby reinforcing the heterosexual normativity which is central to patriarchal society. The third study looks to four 'atypical' subjects who defy the category of 'proper' and 'normal' home economics teacher. These 'atypical' bodies are 'skilled' but fiercely reject the label of 'suffering'. The discussion of the studies is a feminist poststructural account, using Russo's (1994) notion of the grotesque body, which is emergent from Bakhtin's (1968) theory of the carnivalesque. It draws on the 'shreds' of home economics pedagogy, scrutinising them for their subversive, transformative potential. In this analysis, the giving and taking of pleasure and fun in the home economics classroom presents moments of surprise and of carnival. Foucault's notion of the construction of the ethical individual shows these 'atypical' bodies to be 'immoderate' yet striving hard to be 'continent' body subjects.
This research captures moments of transgression which suggest that transformative moments are already embodied in the pedagogical practices of home economics teachers, and these can be 'seen' when re-looking through postmodernist lenses. Hence, the cultural practices of home economics as inevitably marginalised are being contested from within. Until now, home economics as a lived culture has failed to recognise possibilities for reconstructing its own field beyond the confines of modernity. This research is an example of how to think about home economics teachers and the profession as a reconfigured cultural practice. Future research about home economics as a body of knowledge and a site of teacher practice need not retell a simple story of oppression. Using postmodernist epistemologies is one way to provide opportunities for new ways of looking.

Relevance:

90.00%

Publisher:

Abstract:

This thesis is the result of an investigation of a Queensland example of curriculum reform based on outcomes, a type of reform common to many parts of the world during the last decade. The purpose of the investigation was to determine the impact of outcomes on teacher perspectives of professional practice. The focus was chosen to permit investigation not only of changes in behaviour resulting from the reform but also of teachers' attitudes and beliefs developed during implementation. The study is based on qualitative methodology, chosen because of its suitability for the investigation of attitudes and perspectives. The study exploits the researcher's opportunities for prolonged, direct contact with groups of teachers through the selection of an over-arching ethnographic approach, designed to capture the holistic nature of the reform and to contextualise the data within a broad perspective. The selection of grounded theory as a basis for data analysis reflects the open nature of this inquiry and demonstrates the study's constructivist assumptions about the production of knowledge. The study also constitutes a multi-site case study by virtue of the choice of three individual school sites as objects to be studied and to form the basis of the report. Three primary school sites administered by Brisbane Catholic Education were chosen as the focus of data collection. Data were collected as teachers engaged in the first year of implementation of Student Performance Standards, the Queensland version of English outcomes based on the current English syllabus. Teachers' experience of outcomes-driven curriculum reform was studied by means of group interviews conducted at individual school sites over a period of fourteen months, researcher observations and the collection of artefacts such as report cards. Analysis of data followed grounded theory guidelines based on a system of coding. Though classification systems were not generated prior to data analysis, the labelling of categories called on standard, non-idiosyncratic terminology and analytic frames and concepts from existing literature wherever practicable, in order to permit possible comparisons with other related research. Data from school sites were examined individually and then combined to determine teacher understandings of the reform, changes that had been made to practice and teacher responses to these changes in terms of their perspectives of professionalism. Teachers in the study understood the reform as primarily an accountability mechanism. Though teachers demonstrated some acceptance of the intentions of the reform, their responses to its conceptualisation, supporting documentation and implications for changing work practices were generally characterised by reduced confidence, anger and frustration. Though the impact of outcomes-based curriculum reform must be interpreted through the inter-relationships of a broad range of elements which comprise teachers' work and their attitudes towards their work, it is proposed that the substantive findings of the study can be understood in terms of four broad themes. First, when the conceptual design of outcomes did not serve teachers' accountability requirements and outcomes were perceived to be expressed in unfamiliar technical language, most teachers in the study lost faith in the value of the reform and lost confidence in their own abilities to understand or implement it.
Second, this reduction of confidence was intensified when the scope of outcomes exceeded the scope of the teachers' existing curriculum and assessment planning, and teachers were confronted with the necessity to include aspects of syllabuses or school programs which they had previously omitted because of a lack of understanding or appreciation. The corollary was that outcomes promoted greater syllabus fidelity when frameworks were closely aligned. Third, other benefits the teachers associated with outcomes included the development of whole-school curriculum resources and greater opportunity for teacher collaboration, particularly among schools. The teachers, however, considered a wide range of factors when determining the overall impact of the reform, and perceived a number of them as costs of implementation. These included the emergence of ethical dilemmas concerning relationships with students, colleagues and parents; reduced individual autonomy, particularly with regard to the selection of valued curriculum content; and intensification of workload, with the capacity to erode the relationships with students which teachers strongly associated with the rewards of their profession. Finally, in banding together at the school level to resist aspects of implementation, some teachers showed growing awareness of a collective authority capable of being exercised in response to top-down reform. These findings imply that Student Performance Standards require review and additional implementation resourcing to support teachers through times of reduced confidence in their own abilities. Outcomes proved an effective means of high-fidelity syllabus implementation and, provided they are expressed in an accessible way and aligned with syllabus frameworks and terminology, should be considered for inclusion in future syllabuses across a range of learning areas. The study also identifies a range of unintended consequences of outcomes-based curriculum and acknowledges the complexity of relationships among all the aspects of teachers' work. It also notes that the impact of reform on teacher perspectives of professional practice may alter teacher-teacher and school-system relationships in ways that have the potential to influence the effectiveness of future curriculum reform.

Relevance:

90.00%

Publisher:

Abstract:

This research used the Queensland Police Service, Australia, as a major case study. Information on the principles, techniques and processes used, and the reasons for the recording, storing and release of audit information for evidentiary purposes, is reported. It is shown that law enforcement agencies have a two-fold interest in, and legal obligation pertaining to, audit trails. The first interest relates to situations where audit trails are actually used by criminals in the commission of crime, and the second to where audit trails are generated by the information systems used by the police themselves in support of the recording and investigation of crime. Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. It is shown that, of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion of the ability and/or willingness of courts to fully challenge, assess or value audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what may be considered an environment in which a computer system can be said to be operating "properly" and, secondly, what aspects of education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment. Analysis was undertaken to determine whether audit and control of information in a high-security environment, such as law enforcement, could be judged to have improved, or not, in the transition from manual to electronic processes. Information collection, control of processing and audit in the manual processes used by the Queensland Police Service in the period 1940 to 1980 were assessed against the current electronic systems essentially introduced to policing in the decades of the 1980s and 1990s. Results show that electronic systems do provide for faster communications, with centrally controlled and updated information readily available for use by large numbers of users connected across significant geographical distances. However, it is clearly evident that the price paid for this is a reduced ability and/or reluctance to provide improved audit and control processes. To compare the information systems audit and control arrangements of the Queensland Police Service with those of other government departments and agencies, an Australia-wide survey was conducted. Results of the survey were contrasted with the results of a survey conducted by the Australian Commonwealth Privacy Commission four years previously, which showed that security in relation to the recording of activity against access to information held on Australian government computer systems had been poor and a cause for concern. However, within this four-year period there is evidence to suggest that government organisations are increasingly more inclined to generate audit trails. An attack on the overall security of audit trails in computer operating systems was initiated to further investigate findings reported in relation to the government systems survey. The survey showed that information systems audit trails in Microsoft Corporation's "Windows" operating system environments are relied on quite heavily.
An audit of the security of audit trails generated, stored and managed in the Microsoft "Windows 2000" operating system environment was undertaken and compared and contrasted with similar audit trail schemes in the "UNIX" and "Linux" operating systems. Strength of passwords and exploitation of any security problems in access control were targeted using software tools that are freely available in the public domain. Results showed that such security for the "Windows 2000" system is seriously flawed and that the integrity of audit trails stored within these environments cannot be relied upon. A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession are then proposed. This is achieved by examining the current rules and guidelines related to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge used by the medical profession and forensic science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. However, unlike some discipline areas, this analysis has clearly identified two distinct aspects of the matter which appear particularly relevant to IT. These two areas are: expertise gained through the application of IT to information needs in a particular public or private enterprise; and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
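The integrity findings above concern the tamper-evidence of stored audit records. As a point of comparison (a standard technique, not one from the thesis), the sketch below shows the hash-chaining idea that makes retrospective alteration of an audit trail detectable: each record's digest depends on the previous digest, so changing any stored record invalidates every later link.

```python
# Illustrative sketch (not from the thesis): hash-chained audit records.
import hashlib, json

def append_record(chain, record):
    """chain: list of {'record': ..., 'digest': ...} entries."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev
    chain.append({"record": record,
                  "digest": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute the chain; any altered record breaks all later digests."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log = []
append_record(log, {"user": "op1", "action": "read", "file": "case42"})
append_record(log, {"user": "op2", "action": "update", "file": "case42"})
print(verify(log))              # True
log[0]["record"]["user"] = "x"  # tamper with an earlier record
print(verify(log))              # False
```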

Relevance:

90.00%

Publisher:

Abstract:

A one-year mathematics project that focused on measurement was conducted with six Torres Strait Islander schools and communities. Its key focus was to contextualise the teaching and learning of measurement within the students' culture, communities and home languages. Six teachers and two teacher aides participated in the project. This paper reports on the findings from the survey questionnaire administered to the teachers and teacher aides in the first professional development session to identify: a) teachers' experience of teaching in the Torres Strait Islands, b) teachers' beliefs about effective ways to teach Torres Strait Islander students, and c) ways of contextualising measurement within Torres Strait Islander culture, communities and home languages. A wide range of differing levels of knowledge and understanding about how to contextualise measurement to support student learning was identified and analysed. For example, an Indigenous teacher claimed that mathematics and the environment are relational, that is, they are not discrete and in isolation from one another; rather, they interconnect, with mathematical ideas emerging from the environment of the Torres Strait communities.

Relevance:

90.00%

Publisher:

Abstract:

An essential challenge for organisations wishing to overcome informational silos is to implement mechanisms that facilitate, encourage and sustain interactions between otherwise disconnected groups. Using three case examples, this paper explores how Enterprise 2.0 technologies achieve such goals, allowing for the transfer of knowledge by tapping into the tacit and explicit knowledge of disparate groups in complex engineering organisations. The paper is intended as a timely introduction to the benefits and issues associated with the use of Enterprise 2.0 technologies, with the aim of achieving the positive outcomes associated with knowledge management.

Relevance:

90.00%

Publisher:

Abstract:

Background: Sun exposure is the main source of vitamin D. Increasing scientific and media attention to the potential health benefits of sun exposure may lead to changes in sun exposure behaviours. Methods: To provide data that might help frame public health messages, we conducted an online survey among office workers in Brisbane, Australia, to determine knowledge and attitudes about vitamin D and the associations of these with sun protection practices. Of the 4,709 people invited to participate, 2,867 (61%) completed the questionnaire. This analysis included the 1,971 (69%) participants who indicated that they had heard about vitamin D. Results: Lack of knowledge about vitamin D was apparent. Eighteen percent of people were unaware of the bone benefits of vitamin D, but 40% listed currently unconfirmed benefits. Over half of the participants indicated that more than 10 minutes in the sun was needed to attain enough vitamin D in summer, and 28% indicated more than 20 minutes in winter; these beliefs were significantly associated with increased time outdoors and decreased sunscreen use. People believing sun protection might cause vitamin D deficiency (11%) were less likely to be frequent sunscreen users (summer odds ratio, 0.63; 95% confidence interval, 0.52-0.75). Conclusions: Our findings suggest that there is some confusion about sun exposure and vitamin D, and that this may result in reduced sun-protective behaviour. Impact: More information is needed about vitamin D production in the skin. In the interim, education campaigns need to specifically address the vitamin D issue to ensure that skin cancer incidence does not increase.

Relevance:

90.00%

Publisher:

Abstract:

It is increasingly understood that learning, and thus innovation, often occurs via highly interactive, iterative, network-based processes. Simultaneously, economic development policy is increasingly focused on small and medium-sized enterprises (SMEs) as a means of generating growth, creating a clear research issue concerning the roles and interactions of government policy, universities and other sources of knowledge, SMEs, and the creation and dissemination of innovation. This paper analyses the contribution of a range of actors to an SME innovation creation and dissemination framework, reviewing the role of various institutions therein, exploring the contribution of cross-locality networks, and identifying the mechanisms required to operationalise such a framework. Bivariate and multivariate (regression) techniques are employed to investigate both innovation and growth outcomes in relation to these structures; data are derived from the survey responses of over 450 SMEs in the UK. Results are complex and dependent upon the nature of the institutions involved, the type of knowledge sought, and the spatial level of the linkages in place, but overall they highlight the value of cross-locality networks, network governance structures, and certain spillover effects from universities. In general, we find less support for the factors predicting SME growth outcomes than is the case for innovation. Finally, we outline an agenda for further research in the area.

Relevance:

90.00%

Publisher:

Abstract:

The following paper explores the use of collaborative pedagogical approaches to advance foundational architectural design education by linking the design process to sustainable technology principles. After a brief discussion of architectural design education, the collaborative approach is described. This approach facilitates students' exchange of knowledge between two courses, despite there being no explicit or assessable requirement to do so. The result for the students is deeper learning and a design process that is enriched through engagement with sustainable technology. The success of this approach has been measured through questionnaires, evaluation surveys, and a comparative assessment of students common to both courses. The paper focuses on the challenges and innovations in connecting architectural design and technology education, where students are encouraged to implement lessons learnt, thereby closing the gap that these courses have traditionally represented.

Relevance:

90.00%

Publisher:

Abstract:

In the era of the knowledge economy, cities and regions have increasingly invested in their physical, social and knowledge infrastructures so as to foster, attract and retain global talent and investment. Knowledge-based urban development, as a new paradigm in urban planning and development, is being implemented across the globe in order to increase the competitiveness of cities and regions. This chapter provides an overview of the lessons from the Multimedia Super Corridor, Malaysia, one of the first large-scale manifestations of knowledge-based urban development in South East Asia. The chapter investigates the application of the knowledge-based urban development concept within the Malaysian context and, particularly, scrutinises the development and evolution of the Multimedia Super Corridor by focusing on its strategies, implementation policies, infrastructural implications, and the agencies involved in the development and management of the corridor. In the light of the literature and case findings, the chapter provides generic recommendations on the orchestration of knowledge-based urban development for other cities and regions seeking such development.