941 results for Difficult Dialogues
Abstract:
In many bridges, vertical displacements are among the most relevant parameters for structural health monitoring in both the short and long term. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges, yet such measurements are difficult to carry out. In recent years, with the advancement of fiber-optic technologies, fiber Bragg grating (FBG) sensors have become more commonly used in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, FBG sensors are proposed here as the basis of a simple, inexpensive and practical method for measuring the vertical displacements of bridges. A curvature approach is proposed, in which vertical displacements are determined from curvature measurements. In addition, following the successful development of an FBG tilt sensor, an inclination approach using inclination measurements is also proposed. A series of simulation tests of a full-scale bridge was conducted. The results show that both approaches can determine vertical displacements for bridges with various support conditions, with varying stiffness (EI) along the spans, and without any prior knowledge of the loading. These approaches can thus measure vertical displacements for most slab-on-girder and box-girder bridges. Moreover, given the advantages of FBG sensors, they can be implemented to monitor bridge behavior remotely and in real time. Further recommendations for developing these approaches are discussed at the end of the paper.
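As a brief illustration of the curvature approach described above: for a beam, curvature is the second derivative of the vertical displacement, so displacements can be recovered by integrating discrete curvature readings twice and enforcing the support conditions. The sketch below shows this for a simply supported span; the curvature profile, sensor spacing and boundary conditions are illustrative assumptions, not the instrumentation or algorithm of the paper.

```python
# Minimal sketch (not the paper's algorithm): recover vertical displacement from
# discrete curvature readings, e.g. derived from paired FBG strain sensors,
# on a simply supported span. The curvature profile below is synthetic.
import numpy as np
from scipy.integrate import cumulative_trapezoid

L = 30.0                               # assumed span length (m)
x = np.linspace(0.0, L, 16)            # assumed sensor positions along the span
kappa = 4e-4 * np.sin(np.pi * x / L)   # synthetic curvature w''(x), in 1/m

# Integrate twice: curvature -> slope -> deflection (constants still unknown).
theta = cumulative_trapezoid(kappa, x, initial=0.0)
w = cumulative_trapezoid(theta, x, initial=0.0)

# Enforce the simply supported conditions w(0) = w(L) = 0 by removing the
# linear term implied by the unknown integration constants.
w -= w[0] + (w[-1] - w[0]) * x / L

print("midspan deflection ~ %.1f mm" % (1e3 * w[len(x) // 2]))
```

The same double-integration idea carries over to the inclination approach, where the measured tilts already provide the slope and only one integration is required.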
Abstract:
Overall, computer models and simulations have a rather disappointing record within the management sciences as tools for predicting the future. Social and market environments can be influenced by an overwhelming number of variables, and it is therefore difficult to use computer models to make forecasts or to test hypotheses concerning the relationship between individual behaviours and macroscopic outcomes. At the same time, however, advocates of computer models argue that they can be used to overcome the human mind's inability to cope with several complex variables simultaneously or to understand concepts that are highly counterintuitive. This paper seeks to bridge the gap between these two perspectives by suggesting that management research can indeed benefit from computer models by using them to formulate fruitful hypotheses.
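As one concrete example of the kind of model the abstract has in mind, the sketch below runs a Granovetter-style threshold-adoption simulation: each agent adopts a practice once the share of adopters reaches its personal threshold, and a tiny change at the individual level flips the macroscopic outcome. This is the sort of counterintuitive micro-to-macro result that is hard to reason about unaided; the rule and parameters are generic illustrations, not drawn from the paper.

```python
# Minimal sketch of a "micro rules -> macro outcome" simulation experiment.
# Each agent adopts once the current adoption share meets its threshold;
# nudging a single agent's threshold can flip the collective outcome.
import numpy as np

def final_adoption(thresholds, steps=200):
    thresholds = np.asarray(thresholds, dtype=float)
    adopted = thresholds <= 0.0          # agents with threshold 0 start the cascade
    for _ in range(steps):
        adopted = adopted | (thresholds <= adopted.mean())
    return adopted.mean()

n = 100
even = np.arange(n) / n        # thresholds 0.00, 0.01, ..., 0.99: full cascade
nudged = even.copy()
nudged[1] = 2 / n              # one agent's threshold nudged upward: cascade stalls

print("evenly spread thresholds -> final adoption %.2f" % final_adoption(even))
print("one threshold nudged     -> final adoption %.2f" % final_adoption(nudged))
```

Running such a model does not predict any particular market, but it does suggest testable hypotheses about when small behavioural differences matter, which is the use the paper advocates.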
Abstract:
International law’s capacity to influence state behaviour by regulating recourse to violence has been a longstanding source of debate among international lawyers and political scientists. On the one hand, sceptics assert that frequent violations of the prohibition on the use of force have rendered article 2(4) of the UN Charter redundant. They contend that national self-interest, rather than international law, is the key determinant of state behaviour regarding the use of force. On the other hand, defenders of article 2(4) argue first, that most states comply with the Charter framework, and second, that state rhetoric continues to acknowledge the existence of the jus ad bellum. In particular, the fact that violators go to considerable lengths to offer legal or factual justifications for their conduct – typically by relying on the right of self-defence – is advanced as evidence that the prohibition on the use of force retains legitimacy in the eyes of states. This paper identifies two potentially significant features of state practice since 2006 which may signal a shift in states’ perceptions of the normative authority of article 2(4). The first aspect is the recent failure by several states to offer explicit legal justifications for their use of force, or to report action taken in self-defence to the Security Council in accordance with Article 51. Four incidents linked to the global “war on terror” are examined here: Israeli airstrikes in Syria in 2007 and in Sudan in 2009, Turkey’s 2006-2008 incursions into northern Iraq, and Ethiopia’s 2006 intervention in Somalia. The second, more troubling feature is the international community’s apparent lack of concern over the legality of these incidents. Each use of force is difficult to reconcile with the strict requirements of the jus ad bellum; yet none attracted genuine legal scrutiny or debate among other states. While it is too early to conclude that these relatively minor incidents presage long-term shifts in state practice, viewed together the two developments identified here suggest a possible downgrading of the role of international law in discussions over the use of force, at least in conflicts linked to the “war on terror”. This, in turn, may represent a declining perception of the normative authority of the jus ad bellum, and a concomitant admission of the limits of international law in regulating violence.
Abstract:
Measuring the business value that Internet technologies deliver for organisations has proven to be a difficult and elusive task, given their complexity and increased embeddedness within the value chain. Yet, despite the lack of empirical evidence that links the adoption of Information Technology (IT) with increased financial performance, many organisations continue to adopt new technologies at a rapid rate. This is evident in the widespread adoption of Web 2.0 online Social Networking Services (SNSs) such as Facebook, Twitter and YouTube. These new Internet-based technologies, widely used for social purposes, are being employed by organisations to enhance their business communication processes. However, their use is yet to be correlated with an increase in business performance. Owing to the conflicting empirical evidence that links prior IT applications with increased business performance, IT, Information Systems (IS), and E-Business Model (EBM) research has increasingly looked to broader social and environmental factors as a means of examining and understanding the broader influences shaping IT, IS and E-Business (EB) adoption behaviour. Findings from these studies suggest that organisations adopt new technologies as a result of strong external pressures, rather than a clear measure of enhanced business value. In order to ascertain whether this is the case with the adoption of SNSs, this study explores how organisations are creating value (and measuring that value) with the use of SNSs for business purposes, and the external pressures influencing their adoption. In doing so, it seeks to address two research questions: 1. What are the external pressures influencing organisations to adopt SNSs for business communication purposes? 2. Are SNSs providing increased business value for organisations, and if so, how is that value being captured and measured? Informed by the background literature fields of IT, IS, EBM, and Web 2.0, a three-tiered theoretical framework is developed that combines macro-societal, social and technological perspectives as possible causal mechanisms influencing the SNS adoption event. The macro-societal view draws on Castells' (1996) concept of the network society and the behaviour of crowds, herds and swarms to formulate a new explanatory concept of the network vortex. The social perspective draws on key components of institutional theory (DiMaggio & Powell, 1983, 1991), and the technical view draws from the organising vision concept developed by Swanson and Ramiller (1997). The study takes a critical realist approach, and conducts four stages of data collection and one stage of data coding and analysis. Stage 1 consisted of content analysis of the websites and SNSs of many organisations, to identify the types of business purposes SNSs are being used for. Stage 2 also involved content analysis of organisational websites, in order to identify suitable sample organisations in which to conduct telephone interviews. Stage 3 consisted of conducting 18 in-depth, semi-structured telephone interviews within eight Australian organisations from the Media/Publishing and Galleries, Libraries, Archives and Museum (GLAM) industries. These sample organisations were considered leaders in the use of SNS technologies. Stage 4 involved an SNS activity count of the organisations interviewed in Stage 3, in order to rate them as either Advanced Innovator (AI) organisations or Learning Focussed (LF) organisations.
A fifth stage of data coding and analysis of all four data collection stages was conducted, based on the theoretical framework developed for the study and using QSR NVivo 8 software. The findings from this study reveal that SNSs have been adopted by organisations for the purpose of increasing business value, and as a result of strong social and macro-societal pressures. SNSs offer organisations a wide range of value-enhancing opportunities that have broader benefits for customers and society. However, measuring the increased business value is difficult with traditional Return On Investment (ROI) mechanisms, highlighting the need for new value capture and measurement rationales to support the accountability of SNS adoption practices. The study also identified the presence of technical, social and macro-societal pressures, all of which influenced SNS adoption by organisations. These findings contribute important theoretical insight into the increased complexity of pressures influencing technology adoption rationales by organisations, and have important implications for practice, reflecting the expanded global online networks in which organisations now operate. The limitations of the study include the small number of sample organisations in which interviews were conducted, its limited generalisability, and the small range of SNSs selected for the study. However, these limitations were compensated for in part by the expertise of the interviewees and the global significance of the SNSs that were chosen. Future research could replicate the study with a larger sample from different industries, sectors and countries. It could also explore the life cycle of SNSs in a longitudinal study, and map how the technical, social and macro-societal pressures are emphasised through stages of the life cycle. The theoretical framework could also be applied to other social fad technology adoption studies.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers and one poster presentation, of which five have been published and the other two are under review. This project is financially supported by the QUTPRA Grant. The twenty-first century started with the resurrection of lignocellulosic biomass as a potential substitute for petrochemicals. Petrochemicals, which enjoyed sustained economic growth during the past century, have begun to reach, or have already reached, their peak. The world energy situation is complicated by political uncertainty and by the environmental impact associated with petrochemical import and usage. In particular, greenhouse gases and toxic emissions produced by petrochemicals have been implicated as a significant cause of climate change. Lignocellulosic biomass (e.g. sugarcane biomass and bagasse), which potentially enjoys a more abundant, widely distributed and cost-effective resource base, can play an indispensable role in the paradigm transition from a fossil-based to a carbohydrate-based economy. Poly(3-hydroxybutyrate) (PHB) has attracted much commercial interest as a biodegradable plastic material because some of its physical properties are similar to those of polypropylene (PP), even though the two polymers have quite different chemical structures. PHB exhibits a high degree of crystallinity, has a high melting point of approximately 180°C and, most importantly, unlike PP, is rapidly biodegradable. Two major factors currently inhibit the widespread use of PHB: its high cost and its poor mechanical properties. The production costs of PHB are significantly higher than for plastics produced from petrochemical resources (e.g. PP costs $US1 kg-1, whereas PHB costs $US8 kg-1), and its stiff and brittle nature makes processing difficult and impedes its ability to withstand high impact. Lignin, together with cellulose and hemicellulose, is one of the three main components of lignocellulosic biomass. It is a natural polymer occurring in the plant cell wall and, after cellulose, is the most abundant polymer in nature. It is extracted mainly as a by-product in the pulp and paper industry. Although lignin is traditionally burnt in industry for energy, it has many value-adding properties. Lignin, which to date has not been fully exploited, is an amorphous polymer with hydrophobic behaviour. These characteristics make it a good candidate for blending with PHB, and blending is technically a viable route to reducing cost and enhancing product properties. Theoretically, lignin and PHB affect each other's physicochemical properties when they become miscible in a composite. A comprehensive study of the structural, thermal, rheological and environmental properties of lignin/PHB blends, together with neat lignin and PHB, is the scope of this thesis. An introduction to this research, including a description of the research problem, a literature review and an account of the research progress linking the research papers, is presented in Chapter 1. In this research, lignin was obtained from bagasse through extraction with sodium hydroxide. A novel two-step pH precipitation procedure was used to recover soda lignin with a purity of 96.3 wt% from the black liquor (i.e. the spent sodium hydroxide solution).
The precipitation process is presented in Chapter 2. A sequential solvent extraction process was used to fractionate the soda lignin into three fractions. These fractions, together with the soda lignin, were characterised to determine elemental composition, purity, carbohydrate content, molecular weight, and functional group content. The thermal properties of the lignins were also determined. The results are presented and discussed in Chapter 2. On the basis of the type and quantity of functional groups, attempts were made to identify potential applications for each of the individual lignins. As an addendum to the general section on the development of composite materials of lignin, which includes Chapters 1 and 2, studies on the kinetics of bagasse thermal degradation are presented in Appendix 1. The work showed that distinct stages of mass loss depend on residual sucrose. As the development of value-added products from lignin will improve the economics of cellulosic ethanol, a review of lignin applications, which includes lignin/PHB composites, is presented in Appendix 2. Chapters 3, 4 and 5 are dedicated to investigations of the properties of soda lignin/PHB composites. Chapter 3 reports on the thermal stability and miscibility of the blends. Although the addition of soda lignin shifts the onset of PHB decomposition to lower temperatures, the lignin/PHB blends are thermally more stable over a wider temperature range. The results from the thermal study also indicated that blends containing up to 40 wt% soda lignin were miscible. The Tg data for these blends fitted well to the Gordon-Taylor and Kwei models. Fourier transform infrared spectroscopy (FT-IR) evaluation showed that the miscibility of the blends was due to specific hydrogen bonding (and similar interactions) between reactive phenolic hydroxyl groups of lignin and the carbonyl group of PHB. The thermophysical and rheological properties of soda lignin/PHB blends are presented in Chapter 4. In this chapter, the kinetics of thermal degradation of the blends is studied using thermogravimetric analysis (TGA). This preliminary investigation is limited to the processing temperature of blend manufacturing. Of significance in the study is the drop in the apparent activation energy, Ea, from 112 kJ mol-1 for pure PHB to half that value for the blends. This means that the addition of lignin to PHB reduces the thermal stability of PHB, and that the comparatively reduced weight loss observed in the TGA data is associated with the slower rate of lignin degradation in the composite. The Tg of PHB, as well as its melting temperature, melting enthalpy and crystallinity, decreases with increasing lignin content. Results from the rheological investigation showed that at low lignin content (≤30 wt%), lignin acts as a plasticiser for PHB, while at high lignin content it acts as a filler. Chapter 5 is dedicated to the environmental study of soda lignin/PHB blends. The biodegradability of the lignin/PHB blends is compared to that of PHB using the standard soil burial test. To obtain acceptable biodegradation data, samples were buried for 12 months under controlled conditions. Gravimetric analysis, TGA, optical microscopy, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), FT-IR, and X-ray photoelectron spectroscopy (XPS) were used in the study.
The results clearly demonstrated that lignin retards the biodegradation of PHB, and that the miscible blends were more resistant to degradation than the immiscible blends. To gain an understanding of the relationship between the structure of lignin and the properties of the blends, a methanol-soluble lignin, which contains about three times fewer phenolic hydroxyl groups than the parent soda lignin used in preparing the blends for the work reported in Chapters 3 and 4, was blended with PHB and the properties of the blends investigated. The results are reported in Chapter 6. At up to 40 wt% methanol-soluble lignin, the experimental data fitted the Gordon-Taylor and Kwei models, similar to the results obtained for the soda lignin-based blends. However, the values obtained for the interaction parameters of the methanol-soluble lignin blends were slightly lower than those of the soda lignin blends, indicating a weaker association between methanol-soluble lignin and PHB. FT-IR data confirmed that hydrogen bonding is the main interactive force between the reactive functional groups of lignin and the carbonyl group of PHB. In summary, the structural differences existing between the two lignins did not manifest themselves in the properties of their blends.
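For reference, the Gordon-Taylor and Kwei equations mentioned above relate the glass transition temperature of a miscible blend to its composition: Tg = (w1·Tg1 + k·w2·Tg2)/(w1 + k·w2), with the Kwei form adding a q·w1·w2 term. The sketch below fits both forms to composition-Tg data; the component Tg values and blend data are placeholders, not the measurements reported in the thesis.

```python
# Minimal sketch: fit blend Tg data to the Gordon-Taylor and Kwei equations.
# w2 is the lignin weight fraction; all Tg values below are placeholders only.
import numpy as np
from scipy.optimize import curve_fit

Tg1, Tg2 = 4.0, 90.0          # assumed Tg of PHB and lignin (deg C), illustrative

def gordon_taylor(w2, k):
    w1 = 1.0 - w2
    return (w1 * Tg1 + k * w2 * Tg2) / (w1 + k * w2)

def kwei(w2, k, q):
    w1 = 1.0 - w2
    return gordon_taylor(w2, k) + q * w1 * w2

w2 = np.array([0.0, 0.1, 0.2, 0.3, 0.4])       # lignin weight fractions
Tg = np.array([4.0, 12.0, 21.0, 31.0, 42.0])   # hypothetical measured blend Tg

k_gt, _ = curve_fit(gordon_taylor, w2, Tg, p0=[1.0])
(k_kw, q_kw), _ = curve_fit(kwei, w2, Tg, p0=[1.0, 0.0])

print("Gordon-Taylor k = %.2f" % k_gt[0])
print("Kwei k = %.2f, q = %.2f" % (k_kw, q_kw))
```

Comparing the fitted k (and q) values between the soda lignin and methanol-soluble lignin blends is the kind of comparison the thesis uses to gauge the strength of the lignin-PHB interaction.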
Abstract:
Social outcomes, in particular intangible social outcomes, are generally difficult to achieve in the construction industry due to the predominantly episodic, fragmented and heavily regulated nature of construction, which predisposes the industry towards mainstream construction processes and design. The Western Australian ‘Percent for Art’ policy is recognized for stimulating social outcomes by creating richer and more aesthetically pleasing social environments through the incorporation of artwork into public buildings. A case study of four Percent for Art projects highlights the role of the Artwork Selection Committee in incorporating artwork into construction. A total of 20 semi-structured interviews were conducted with committee members and policy officers. Data analysis involved a combination of pattern coding and matrix categorization, and resulted in the identification of the committee’s three key elements of collaborative communication, democratic decision-making and project champions. The findings suggest these key elements foster the interaction, communication and relationships needed to facilitate feedback, enhance relationships, create cross-functional teams and lower project resistance, all of which are necessary to overcome constraints to social outcomes in construction. The findings provide greater insight into the mechanisms for achieving social outcomes and a basis for future discussion about the processes for achieving social outcomes in the construction industry.
Abstract:
Business postgraduate education is rapidly adopting virtual learning environments to meet the needs of a time-poor stakeholder community in which part-time students find it difficult to attend face-to-face classes. Creating engaged, flexible learning opportunities in the virtual world is therefore the current challenge for many business academics. However, in the blended learning environment there is also the added pressure of encouraging these students to develop soft managerial or generic skills such as self-reflection. The current paper provides an overview of an action-research activity exploring the experiences of students who were required to acquire the skills of self-reflection within a blended learning unit dominated by on-line learning delivery. We present the responses of students and the changes made to our teaching and learning activities to improve the facilitation of both our face-to-face delivery and the on-line learning environment.
Abstract:
The objective of this thesis is to investigate whether the corporate governance practices adopted by Chinese listed firms are associated with the quality of earnings information. Based on a review of agency and institutional theory, this study develops hypotheses that predict the monitoring effectiveness of the board and the audit committee. Using a combination of univariate and multivariate analyses, the association between corporate governance mechanisms and earnings management is tested from 2004 to 2008. Through analysing the empirical results, a number of findings are summarised below. First, board independence is weakened by the introduction of government officials as independent directors on the boards. Government officials acting as independent directors claim that they meet the definition of independent director set by the regulation. However, they have some connection with the State, which is the controlling shareholder in listed State-owned enterprise (SOE) affiliated companies. Consequently, the effect of the independent director’s expertise in constraining earnings management is mitigated, as demonstrated by an insignificant association between board expertise and earnings management. An alternative explanation for the ineffectiveness of board independence may point to the pre-selection of independent directors by the powerful CEO. It is argued that a CEO can manipulate the board composition and choose the "desirable" independent directors to monitor themselves. Second, a number of internal mechanisms, such as board size, board activities, and the separation of the roles of the CEO and chair, are found to be significantly associated with discretionary accruals. This result suggests that there are advantages in having a large and active board in the Chinese setting, which can offset the disadvantages associated with large boards, such as increased bureaucracy, and hence increase the constraining effects of a large and resourceful board. Third, factor analysis identifies two factors: CEO power and board power. CEO power is the factor consisting of CEO duality and turnover, and board power is composed of board size and board activity. The results for CEO power show that if a Chinese listed company has CEO duality and turnover at the same time, it is more likely to have a high level of earnings management. The significant and negative relationship between board power and accruals indicates that large boards with frequent meetings can be associated with a low level of earnings management. Overall, the factor analysis suggests that certain governance mechanisms complement each other to become more efficient monitors of opportunistic earnings management, and that a combination of board characteristics can increase the negative association with earnings management. Fourth, the insignificant association between audit committees and earnings management in Chinese listed firms suggests that the Chinese regulator should strengthen audit committee functions. This thesis calls for listed firms to disclose more information on audit committee composition and activities, which would facilitate future research on the Chinese audit committee’s monitoring role. Fifth, the interaction results between State ownership and board characteristics show that dominant State ownership has a moderating effect on board monitoring power, as the State controls, in total, 42% of the issued shares. The high percentage of State ownership makes it difficult for non-controlling institutional shareholders to challenge the State’s dominant status.
As a result, the association between non-controlling institutional ownership and earnings management is insignificant in most situations. Lastly, firms audited by the international Big 4 have lower abnormal accruals than firms audited by domestic Chinese audit firms. In addition, the inverse U-shaped relationship between audit tenure and earnings quality demonstrates the changing effects of audit quality after a certain period of appointment. Furthermore, this thesis finds that listing on the Hong Kong Stock Exchange can be an alternative governance mechanism that disciplines Chinese firms to follow strict Hong Kong listing requirements. The management of Hong Kong-listed companies is exposed to the scrutiny of international investors and Hong Kong regulators. This in turn reduces their chances of conducting self-interested earnings manipulation. This study is designed to fill the gap in the Chinese governance literature related to earnings management. Previous research on corporate governance mechanisms and earnings management in China is not conclusive. The current research builds on previous literature and provides some meaningful implications for practitioners, regulators, academics, and international investors who have investment interests in a transitional country. The findings of this study contribute to the corporate governance and earnings management literature in the context of the transitional economy of China. The use of alternative measures for earnings management yields results similar to those from the accruals models and produces additional findings.
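The abstract refers to discretionary accruals without naming the specific accruals model, so purely as an illustration the sketch below estimates discretionary accruals with a Jones-type regression: total accruals scaled by lagged assets are regressed on the inverse of lagged assets, the change in revenue and gross PPE, and the residual is taken as the discretionary component. The firm-year figures are hypothetical and do not come from the thesis.

```python
# Illustrative sketch of a Jones-type discretionary accruals estimate.
# Values are hypothetical; the thesis's exact accruals model is not named here.
import numpy as np

# Each element: total accruals TA_t, lagged assets A_{t-1}, change in revenue, gross PPE
TA    = np.array([ 12.0,  -8.0,   5.0,  20.0, -15.0,   9.0])
A_lag = np.array([400.0, 350.0, 500.0, 620.0, 450.0, 380.0])
dREV  = np.array([ 30.0, -10.0,  25.0,  40.0, -20.0,  15.0])
PPE   = np.array([220.0, 180.0, 300.0, 350.0, 260.0, 200.0])

# Regressors and dependent variable, all scaled by lagged assets.
X = np.column_stack([1.0 / A_lag, dREV / A_lag, PPE / A_lag])
y = TA / A_lag

# OLS via least squares; fitted values are "normal" accruals,
# and the residual is the discretionary (abnormal) component.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
discretionary = y - X @ beta

print("coefficients:", np.round(beta, 3))
print("discretionary accruals:", np.round(discretionary, 3))
```

In governance studies of this kind, the absolute value of the residual is then typically related to board and ownership characteristics, which is the association the thesis tests.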
Abstract:
In this paper I examine the recent arguments by Charles Foster, Jonathan Herring, Karen Melham and Tony Hope against the utility of the doctrine of double effect. One basis on which they reject the utility of the doctrine is their claim that it is notoriously difficult to apply what they identify as its 'core' component, namely, the distinction between intention and foresight. It is this contention that is the primary focus of my article. I argue against this claim that the intention/foresight distinction remains a fundamental part of the law in those jurisdictions where intention remains an element of the offence of murder and that, accordingly, it is essential to resolve the putative difficulties of applying the intention/foresight distinction so as to ensure the integrity of the law of murder. I argue that the main reasons advanced for the claim that the intention/foresight distinction is difficult to apply are ultimately unsustainable, and that the distinction is not as difficult to apply as the authors suggest.
Abstract:
High fidelity simulation as a teaching and learning approach is being embraced by many schools of nursing. Our school embarked on integrating high fidelity (HF) simulation into the undergraduate clinical education program in 2011. Low and medium fidelity simulation has been used for many years, but this did not simplify the integration of HF simulation. Alongside considerations of how and where HF simulation would be integrated, issues arose with: student consent and participation for observed activities; data management of video files; staff development; and conceptualising how methods for student learning could be researched. Simulation for undergraduate student nurses commenced as a formative learning activity, undertaken in groups of eight, where four students undertake the ‘doing’ role and four are structured observers, who then take a formal role in the simulation debrief. Challenges for integrating simulation into student learning included conceptualising and developing scenarios to trigger students’ decision making and the application of skills, knowledge and attitudes explicit to solving clinical ‘problems’. Developing and planning scenarios for students to ‘try out’ skills and make decisions for problem solving went beyond choosing the pre-existing scenarios inbuilt with the software. The supplied scenarios were not concept based, but rather focussed on knowledge, skills and the technology of the manikin. Challenges lay in using the technology for the purpose of building conceptual mastery rather than using technology simply because it was available. As we integrated the use of HF simulation into the final year of the program, the focus was on building skills, knowledge and attitudes that went beyond technical skill, and provided an opportunity to bridge the gap with theory-based knowledge that students often found difficult to link to clinical reality. We wished to provide opportunities to develop experiential knowledge based on application and clinical reasoning processes in team environments where problems are encountered and where, to solve them, the nurse must show leadership and direction. Other challenges included students consenting to simulations being videotaped and the ethical considerations of this. For example, if one student in a group of eight did not consent, did this mean they missed the opportunity to undertake simulation, or that others in the group might be disadvantaged by being unable to review their performance? This has implications for freely given consent, but also for equity of access to learning opportunities for students who wished to be taped and those who did not. Alongside this issue were the details behind data management, storage and access. Developing staff with varying levels of computer skills to use the software and undertake a different approach to being the ‘teacher’ required innovation, and we took an experiential approach. Considering explicit learning approaches to be trialled was not a difficult proposition, but considering how to enact this as research, with issues of blinding, timetabling of blinded groups, reducing bias when testing the results of different learning approaches, and gaining ethical approval, was problematic. This presentation presents examples of these challenges and how we overcame them.
Abstract:
Purpose: Communication is integral to effective trauma care provision. This presentation will report on barriers to meaningful information transfer for multi-trauma patients upon discharge from the Emergency Department (ED) to the care areas of the Intensive Care Unit, High Dependency Unit, and Perioperative Services. This is an ongoing study at one tertiary level hospital in Queensland. Method: This is a multi-phase, mixed method study. In Phase 1, data were collected about information transfer. This phase was initially informed by a comprehensive literature review, and then by focus groups, a chart audit, a staff survey and a review of national and international trauma forms. Results: The barriers identified related to nursing handover, documented information, time inefficiency, patient complexity and stability, and time of transfer. Specifically, these included differences in staff expectations and variation in the nursing handover processes; no agreed minimum dataset of information to be handed over; missing, illegible or difficult-to-find information in documentation (both medical and nursing); and low compliance with some forms used for documentation. Handover of these patients is complex, with information coming from many sources, and dealing with issues is more difficult when these patients are transferred out of hours. Conclusions and further directions: This study investigated the current communication processes and standards of information transfer to identify barriers and issues. The barriers identified were the structure used for documentation, the processes used (e.g. handover), patient acuity and time. This information is informing the development, implementation and evaluation of strategies to ameliorate the issues identified.
Abstract:
There is a lack of writing on the issue of the education rights of people with disabilities by authors of any theoretical persuasion. While the deficiency of theory may be explained by a variety of historical, philosophical and practical considerations, it is a deficiency which must be addressed. Otherwise, any statement of rights rings out as hollow rhetoric unsupported by sound reason and moral rectitude. This paper attempts to address this deficiency in education rights theory by postulating a communitarian theory of the education rights of people with disabilities. The theory is developed from communitarian writings on the role of education in democratic society. The communitarian school, like the community within which it nests, is inclusive. Schools both reflect and model the shape of communitarian society and have primary responsibility for teaching the knowledge and virtues which will allow citizens to belong to and function within society. Communitarians emphasise responsibilities, however, as the corollary of rights and require the individual good to yield to community good when the hard cases arise. The article not only explains the basis of the right to an inclusive education, therefore, but also engages with the difficult issue of when such a right may not be enforceable.
Abstract:
Context: Parliamentary committees established in Westminster parliaments, such as Queensland's, provide a cross-party structure that enables them to recommend policy and legislative changes that may otherwise be difficult for one party to recommend. The overall parliamentary committee process tends to be more cooperative and less adversarial than the main chamber of parliament and, as a result, this process permits parliamentary committees to base their recommendations more on the available research evidence and less on political or party considerations. Objectives: This paper considers the contributions that parliamentary committees in Queensland have made in the past in the areas of road safety, drug use, and organ and tissue donation. The paper also discusses the importance of researchers actively engaging with parliamentary committees to ensure the best evidence-based policy outcomes. Key messages: In the past, parliamentary committees have successfully facilitated important safety changes, with many committee recommendations based on research results. In order to maximise the benefits of the parliamentary committee process, it is essential that researchers inform committees about their work and become key stakeholders in the inquiry process. Researchers can keep committees informed by making submissions to their inquiries, responding to requests for information and appearing as witnesses at public hearings. Researchers should emphasise the key findings and implications of their research, as well as consider the jurisdictional implications and political consequences. It is important that researchers understand the differences between lobbying and providing informed recommendations when interacting with committees. Discussion and conclusions: Parliamentary committees in Queensland have successfully assisted in the introduction of evidence-based policy and legislation. In order to present best practice recommendations, committees rely on the evidence presented to them, including researchers' results. Actively engaging with parliamentary committees will help researchers to turn their results into practice, with a corresponding decrease in injuries and fatalities. Developing an understanding of parliamentary committees, and the typical inquiry process used by these committees, will help researchers to present their research results in a manner that will encourage the adoption of their ideas by parliamentary committees, the presentation of these results as recommendations within the report, and the subsequent enactment of the committee’s recommendations by the government.
Abstract:
Vehicle-emitted particles are of significant concern because of their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations than other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially because the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger’s spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in each driving condition within the transport microenvironment. In order to address these gaps in scientific knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic-interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing a typical transport microenvironment. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected to conduct measurements of particle number size distributions, and particle number and PM2.5 concentrations, during two different seasons. Traffic and meteorological parameters were also monitored simultaneously, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at the stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely attributable to the lower average daily temperature compared with that at the station with a canyon structure (canyon station). During precipitation events, it was found that particle number concentration in the size range 25-250 nm decreased greatly, and that the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2% and 22.6% at the open and canyon station, respectively. The effect of ambient wind speed on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of day, and with the same ambient wind speeds and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% significance level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor influencing PN7-3000 concentrations.
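The station-design comparison above rests on a paired t-test over 33 matched half-hourly PN7-3000 concentrations. The sketch below shows how such a paired comparison can be run; the concentration values are made up and only stand in for the measured data.

```python
# Minimal sketch of the paired comparison of half-hourly PN7-3000 concentrations
# at the two stations; the values below are synthetic, not the measured data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_pairs = 33                                            # matched half-hourly periods
open_station = rng.normal(2.0e4, 4.0e3, n_pairs)        # particles/cm^3, hypothetical
canyon_station = open_station + rng.normal(0.0, 3.0e3, n_pairs)

res = ttest_rel(open_station, canyon_station)
print("paired t = %.2f, p = %.2f" % (res.statistic, res.pvalue))
# A large p-value (as in the study, t = 0.06, p = 0.96) gives no evidence that
# station design changed the paired half-hourly concentrations.
```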
Passenger exposure to bus emissions on a platform was further evaluated at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. For the whole day, the average PN13-800 concentration was 1.3 × 10^4 and 1.0 × 10^4 particles/cm3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios, a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8%, the daily exposure fractions (the percentage of daily exposure accounted for by a location) at the station were 2.7% and 2.8% for exposure to PN13-800, and 2.7% and 3.5% for exposure to PM2.5, for the school student and the office worker, respectively. A new parameter, “exposure intensity” (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution for further dispersion models of traffic-interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic-interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as the different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multiple representative segments to capture an accurate emission distribution for real vehicle flow. This model not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emissions, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess the increase in particle number emissions from motor vehicles forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (“1 car / 1 direction”), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced during stopping at a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41, and 43 for the above 6 cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study of particle number and mass concentration, together with particle size distribution, in a bus station transport microenvironment, as influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus-emitted particles was also assessed according to waiting time and location along the platform, as well as the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also demonstrated its applicability and simplicity for use in a real-world transport microenvironment.
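The abstract describes the CLSE model only in outline (representative road segments with driving-mode-specific emission rates), so the sketch below merely illustrates that bookkeeping: per-vehicle emissions are summed segment by segment as the mode-specific emission rate times the time spent in each segment, and then compared with an uninterrupted cruise through the same stretch. All rates, times and segment definitions are invented for illustration and are not the model's actual parameters.

```python
# Illustrative bookkeeping in the spirit of a composite line source emission
# (CLSE) model: sum per-vehicle emissions over road segments, weighting each
# segment by its driving mode and the time a vehicle spends there.
# All emission rates and segment timings below are invented.

EMISSION_RATE = {      # particle number emission rate per vehicle (#/s), assumed
    "cruise": 2.0e11,
    "decelerate": 1.0e11,
    "idle": 0.5e11,
    "accelerate": 8.0e11,
}

# Interrupted flow through the crossing: (segment, driving mode, seconds/vehicle)
interrupted = [
    ("approach", "decelerate", 6.0),
    ("queue", "idle", 30.0),
    ("departure", "accelerate", 8.0),
    ("downstream", "cruise", 4.0),
]

per_vehicle_interrupted = sum(EMISSION_RATE[mode] * t for _, mode, t in interrupted)
# Steady flow: the same stretch is crossed at cruise speed in ~10 s (assumed).
per_vehicle_steady = EMISSION_RATE["cruise"] * 10.0

print("interrupted / steady emission ratio ~ %.1f"
      % (per_vehicle_interrupted / per_vehicle_steady))
```

Because accelerating vehicles emit at the highest rate in this kind of accounting, the segments where vehicles accelerate dominate the total, which mirrors the thesis's finding that the acceleration distance is critical when estimating particle number emissions.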
Abstract:
The great majority of police officers are committed to honourable and competent public service and consistently demonstrate integrity and accountability in carrying out the often difficult, complex and sometimes dangerous activities involved in policing by consent. However, in every police agency there exists an element of dishonesty, lack of professionalism and criminal behaviour. This article is based on archival research of criminal behaviour in the Norwegian police force. A total of 60 police employees were prosecuted in court because of misconduct and crime from 2005 to 2010. Court cases were coded on two potential predictors of court sentence, measured in days of imprisonment: type of deviance and level of deviance. Categories of police crime and levels were organised according to a conceptual framework developed for assessing and managing police deviance. The empirical findings support the hypothesis that, as the seriousness of police crime increases in breadth and depth, so too does the severity of the court sentence as measured by time in prison.