915 results for DETERMINING CAPSULAR SEROTYPES


Relevance:

10.00%

Publisher:

Abstract:

The focus of this thesis is discretionary work effort, that is, work effort that is voluntary, is above and beyond what is minimally required or normally expected to avoid reprimand or dismissal, and is organisationally functional. Discretionary work effort is an important construct because it is known to affect individual performance as well as organisational efficiency and effectiveness. To optimise organisational performance and ensure their long-term competitiveness and sustainability, firms need to be able to induce their employees to work at or near their peak level. To work at or near their peak level, individuals must be willing to supply discretionary work effort. Thus, managers need to understand the determinants of discretionary work effort. Nonetheless, despite many years of scholarly investigation across multiple disciplines, considerable debate still exists concerning why some individuals supply only minimal work effort whilst others expend effort well above and beyond what is minimally required of them (i.e. they supply discretionary work effort). Even though it is well recognised that discretionary work effort is important for promoting organisational performance and effectiveness, many authors claim that too little is being done by managers to increase the discretionary work effort of their employees. In this research, I have adopted a multi-disciplinary approach towards investigating the role of monetary and non-monetary work environment characteristics in determining discretionary work effort. My central research questions were "What non-monetary work environment characteristics do employees perceive as perks (perquisites) and irks (irksome work environment characteristics)?" and "How do perks, irks and monetary rewards relate to an employee's level of discretionary work effort?" My research took a unique approach in addressing these research questions. 
By bringing together the economics and organisational behaviour (OB) literatures, I identified problems with the current definition and conceptualisations of the discretionary work effort construct. I then developed and empirically tested a more concise and theoretically-based definition and conceptualisation of this construct. In doing so, I disaggregated discretionary work effort to include three facets - time, intensity and direction - and empirically assessed if different classes of work environment characteristics have a differential pattern of relationships with these facets. This analysis involved a new application of a multi-disciplinary framework of human behaviour as a tool for classifying work environment characteristics and the facets of discretionary work effort. To test my model of discretionary work effort, I used a public sector context in which there has been limited systematic empirical research into work motivation. The program of research undertaken involved three separate but interrelated studies using mixed methods. Data on perks, irks, monetary rewards and discretionary work effort were gathered from employees in 12 organisations in the local government sector in Western Australia. Non-monetary work environment characteristics that should be associated with discretionary work effort were initially identified through a review of the literature. Then, a qualitative study explored what work behaviours public sector employees perceive as discretionary and what perks and irks were associated with high and low levels of discretionary work effort. Next, a quantitative study developed measures of these perks and irks. A Q-sort-type procedure and exploratory factor analysis were used to develop the perks and irks measures. Finally, a second quantitative study tested the relationships amongst perks, irks, monetary rewards and discretionary work effort. 
Confirmatory factor analysis was first used to confirm the factor structure of the measurement models. Correlation analysis, regression analysis and effect-size correlation analysis were used to test the hypothesised relationships in the proposed model of discretionary work effort. The findings confirmed five hypothesised non-monetary work environment characteristics as common perks and two of three hypothesised non-monetary work environment characteristics as common irks. Importantly, they showed that perks, irks and monetary rewards are differentially related to the different facets of discretionary work effort. The convergent and discriminant validities of the perks and irks constructs as well as the time, intensity and direction facets of discretionary work effort were generally confirmed by the research findings. This research advances the literature in several ways: (i) it draws on the Economics and OB literatures to redefine and reconceptualise the discretionary work effort construct to provide greater definitional clarity and a more complete conceptualisation of this important construct; (ii) it builds on prior research to create a more comprehensive set of perks and irks for which measures are developed; (iii) it develops and empirically tests a new motivational model of discretionary work effort that enhances our understanding of the nature and functioning of perks and irks and advances our ability to predict discretionary work effort; and (iv) it fills a substantial gap in the literature on public sector work motivation by revealing what work behaviours public sector employees perceive as discretionary and what work environment characteristics are associated with their supply of discretionary work effort. Importantly, by disaggregating discretionary work effort this research provides greater detail on how perks, irks and monetary rewards are related to the different facets of discretionary work effort. 
Thus, from a theoretical perspective this research also demonstrates the conceptual meaningfulness and empirical utility of investigating the different facets of discretionary work effort separately. From a practical perspective, identifying work environment factors that are associated with discretionary work effort enhances managers' capacity to tap this valuable resource. This research indicates that to maximise the potential of their human resources, managers need to address perks, irks and monetary rewards. It suggests three different mechanisms through which managers might influence discretionary work effort and points to the importance of training for both managers and non-managers in cultivating positive interpersonal relationships.
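The correlation and effect-size analyses mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual analysis pipeline; the ratings below are hypothetical, and Pearson's r is used here purely as an illustrative effect-size measure of the association between a work environment characteristic and an effort facet.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient, usable as an effect-size measure
    of the association between two rating scales."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical employee ratings: perceived perks vs. the "time" facet of effort
perks  = [1, 2, 3, 4, 5, 6]
effort = [2, 1, 4, 3, 6, 5]
print(round(pearson_r(perks, effort), 2))  # strong positive association
```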

Relevance:

10.00%

Publisher:

Abstract:

There is no single, coherent jurisprudence for civil society organisations. Pressure for a clearly enunciated body of law applying to the whole of this sector of society continues to increase. The rise of third sector scholarship, the retreat of the welfare state, the rediscovery of the concept of civil society and pressures to strengthen social capital have all contributed to an ongoing stream of inquiry into the laws that regulate and favour civil society organisations. There have been almost thirty inquiries over the last sixty years into the doctrine of charitable purpose in common law countries. Those inquiries have established that problems with the law applying to civil society organisations are rooted in the common law adopting a ‘technical’ definition of charitable purpose and the failure of this body of law to develop in response to societal changes. Even though it is now well recognised that problems with law reform stem from problems inherent in the doctrine of charitable purpose, statutory reforms have merely ‘bolted on’ additions to the flawed ‘technical’ definition. In this way the scope of operation of the law has been incrementally expanded to include a larger number of civil society organisations. This piecemeal approach continues the exclusion of most civil society organisations from the law of charities discourse, and fails to address the underlying jurisprudential problems. Comprehensive reform requires revisiting the foundational problems embedded in the doctrine of charitable purpose, being informed by recent scholarship, and a paradigm shift that extends the doctrine to include all civil society organisations. Scholarly inquiry into civil society organisations, particularly from within the discipline of neoclassical economics, has elucidated insights that can inform legal theory development. 
This theory development requires decoupling the two distinct functions performed by the doctrine of charitable purpose, which are: setting the scope of regulation, and determining entitlement to favours, such as tax exemption. If the two different functions of the doctrine are considered separately in the light of theoretical insights from other disciplines, the architecture for a jurisprudence emerges that facilitates regulation, but does not necessarily favour all civil society organisations. Informed by that broader discourse, it is argued that when determining the scope of regulation, civil society organisations are identified by reference to charitable purposes that are not technically defined. These charitable purposes are in essence purposes which are: Altruistic, for public Benefit, pursued without Coercion. These charitable purposes differentiate civil society organisations from organisations in the three other sectors, namely: Business, which is manifest in lack of altruism; Government, which is characterised by coercion; and Family, which is characterised by benefits being private, not public. When determining entitlement to favour, it is theorised that it is the extent or nature of the public benefit evident in the pursuit of a charitable purpose that justifies entitlement to favour. Entitlement to favour based on the extent of public benefit is theoretically simpler: the greater the public benefit, the greater the justification for favour. To be entitled to favour based on the nature of a purpose being charitable, the purpose must fall within one of three categories developed from the first three heads of Pemsel’s case (the landmark categorisation case on taxation favour). The three categories proposed are: Dealing with Disadvantage, Encouraging Edification, and Facilitating Freedom. 
In this alternative paradigm a recast doctrine of charitable purpose underpins a jurisprudence for civil society in a way similar to the way contract underpins the jurisprudence for the business sector, the way that freedom from arbitrary coercion underpins the jurisprudence of the government sector and the way that equity within families underpins succession and family law jurisprudence for the family sector. This alternative architecture for the common law, developed from the doctrine of charitable purpose but inclusive of all civil society purposes, is argued to cover the field of the law applying to civil society organisations and warrants its own third space as a body of law between public law and private law in jurisprudence.

Relevance:

10.00%

Publisher:

Abstract:

Plants have been identified as promising expression systems for the commercial production of recombinant proteins. Plant-based protein production or “biofarming” offers a number of advantages over traditional expression systems in terms of scale of production, the capacity for post-translational processing, provision of a product free of contaminants, and cost effectiveness. A number of pharmaceutically important and commercially valuable proteins, such as antibodies, biopharmaceuticals and industrial enzymes, are currently being produced in plant expression systems. However, several challenges still remain to improve recombinant protein yield with no ill effect on the host plant. The ability of transgenic plants to produce foreign proteins at commercially viable levels can be directly related to the level and cell specificity of the selected promoter driving the transgene. The accumulation of recombinant proteins may be controlled by a tissue-specific, developmentally-regulated or chemically-inducible promoter, such that expression of recombinant proteins can be spatially or temporally controlled. The strict control of gene expression is particularly useful for proteins that are considered toxic and whose expression is likely to have a detrimental effect on plant growth. To date, the most commonly used promoter in plant biotechnology is the cauliflower mosaic virus (CaMV) 35S promoter, which is used to drive strong, constitutive transgene expression in most organs of transgenic plants. Of particular interest to researchers in the Centre for Tropical Crops and Biocommodities at QUT are tissue-specific promoters for the accumulation of foreign proteins in the roots, seeds and fruit of various plant species, including tobacco, banana and sugarcane. Therefore this Master's project aimed to isolate and characterise root- and seed-specific promoters for the control of genes encoding recombinant proteins in plant-based expression systems. 
Additionally, the effects of matching cognate terminators with their respective gene promoters were assessed. The Arabidopsis root promoters ARSK1 and EIR1 were selected from the literature based on their reported limited root expression profiles. Both promoters were analysed using the PlantCARE database to identify putative motifs or cis-acting elements that may be associated with this activity. A number of motifs were identified in the ARSK1 promoter region, including WUN (wound-inducible), MBS (MYB binding site), Skn-1, and a RY core element (seed-specific), and in the EIR1 promoter region, including Skn-1 (seed-specific), Box-W1 (fungal elicitor), Aux-RR core (auxin response) and ABRE (ABA response). However, no previously reported root-specific cis-acting elements were observed in either promoter region. To confirm root specificity, both promoters, and truncated versions, were fused to the GUS reporter gene and the expression cassette introduced into Arabidopsis via Agrobacterium-mediated transformation. Despite the reported tissue-specific nature of these promoters, both upstream regulatory regions directed constitutive GUS expression in all transgenic plants. Further, the ARSK1 promoter directed GUS expression at levels similar to those of the control CaMV 35S promoter. The truncated version of the EIR1 promoter (1.2 kb) showed some differences in the level of GUS expression compared to the 2.2 kb promoter. This suggests an enhancer element is contained in the 2.2 kb upstream region that increases transgene expression. The Arabidopsis seed-specific genes ATS1 and ATS3 were selected from the literature based on their seed-specific expression profiles, and gene expression was confirmed in this study as seed-specific by RT-PCR analysis. The selected promoter regions were analysed using the PlantCARE database in order to identify any putative cis elements. 
The seed-specific motifs GCN4 and Skn-1, which are associated with elevated expression levels in the endosperm, were identified in both promoter regions. Additionally, the seed-specific RY element and the ABRE were located in the ATS1 promoter. Both promoters were fused to the GUS reporter gene and used to transform Arabidopsis plants. GUS expression from the putative promoters was constitutive in all transgenic Arabidopsis tissue tested. Importantly, the positive control FAE1 seed-specific promoter also directed constitutive GUS expression throughout transgenic Arabidopsis plants. The constitutive nature seen in all of the promoters used in this study was not anticipated. While variations in promoter activity can be caused by a number of influencing factors, the variation in promoter activity observed here would imply a major contributing factor common to all plant expression cassettes tested. All promoter constructs generated in this study were based on the binary vector pCAMBIA2300. This vector contains the plant selection gene (NPTII) under the transcriptional control of the duplicated CaMV 35S promoter. This CaMV 35S promoter contains two enhancer domains that confer strong, constitutive expression of the selection gene and is located immediately upstream of the promoter-GUS fusion. During the course of this project, Yoo et al. (2005) reported that transgene expression is significantly affected when the expression cassette is located on the same T-DNA as the 35S enhancer. It was concluded that the trans-acting effects of the enhancer activate and control transgene expression, causing irregular expression patterns. This phenomenon seems the most plausible reason for the constitutive expression profiles observed with the root- and seed-specific promoters assessed in this study. The expression from some promoters can be influenced by their cognate terminator sequences. 
Therefore, the Arabidopsis ARSK1, EIR1, ATS1 and ATS3 terminator sequences were isolated and incorporated into expression cassettes containing the GUS reporter gene under the control of their cognate promoters. Again, unrestricted GUS activity was displayed throughout transgenic plants transformed with these reporter gene fusions. As previously discussed, constitutive GUS expression was most likely due to the trans-acting effect of the upstream CaMV 35S promoter in the selection cassette located on the same T-DNA. The results obtained in this study make it impossible to assess the influence that matching terminators with their cognate promoters has on transgene expression profiles. The obvious future direction of research continuing from this study would be to transform pBIN-based promoter-GUS fusions (i.e. constructs containing no CaMV 35S promoter driving the plant selection gene) into Arabidopsis in order to determine the true tissue specificity of these promoters and evaluate the effects of their cognate 3’ terminator sequences. Further, promoter truncations based around the cis-elements identified here may assist in determining whether these motifs are in fact involved in the overall activity of the promoter.

Relevance:

10.00%

Publisher:

Abstract:

Recent measurements of particle size distributions and particle concentrations near a busy road cannot be explained by the conventional mechanisms for particle evolution of combustion aerosols. Specifically, these mechanisms appear to be inadequate to explain the experimental observations of particle transformation and the evolution of the total number concentration. This resulted in the development of a new mechanism, based on thermal fragmentation, for the evolution of combustion aerosol nano-particles. A complex and comprehensive pattern of evolution of combustion aerosols, involving particle fragmentation, was then proposed and justified. In that model it was suggested that thermal fragmentation occurs in aggregates of primary particles, each of which contains a solid graphite/carbon core surrounded by volatile molecules bonded to the core by strong covalent bonds. Due to the presence of strong covalent bonds between the core and the volatile (frill) molecules, such primary composite particles can be regarded as solid, despite the presence of a significant (possibly dominant) volatile component. Fragmentation occurs when weak van der Waals forces between such primary particles are overcome by their thermal (Brownian) motion. In this work, the accepted concept of thermal fragmentation is advanced to determine whether fragmentation is likely in liquid composite nano-particles. It has been demonstrated that, at least at some stages of evolution, combustion aerosols contain a large number of composite liquid particles containing presumably several components such as water, oil, volatile compounds, and minerals. It is possible that such composite liquid particles may also experience thermal fragmentation and thus contribute to, for example, the evolution of the total number concentration as a function of distance from the source. 
Therefore, the aim of this project is to examine theoretically the possibility of thermal fragmentation of composite liquid nano-particles consisting of immiscible liquid components. The specific focus is on ternary systems which include two immiscible liquid droplets surrounded by another medium (e.g., air). The analysis shows that three different structures are possible: the complete encapsulation of one liquid by the other, partial encapsulation of the two liquids in a composite particle, and the two droplets separated from each other. The probability of thermal fragmentation of two coagulated liquid droplets is discussed and examined for different volumes of the immiscible fluids in a composite liquid particle and their surface and interfacial tensions, through the determination of the Gibbs free energy difference between the coagulated and fragmented states and comparison of this energy difference with the typical thermal energy kT. The analysis reveals that fragmentation is much more likely for a partially encapsulated particle than for a completely encapsulated particle. In particular, it was found that thermal fragmentation is much more likely when the volumes of the two liquid droplets that constitute the composite particle are very different. Conversely, when the two liquid droplets are of similar volumes, the probability of thermal fragmentation is small. It is also demonstrated that the Gibbs free energy difference between the coagulated and fragmented states is not the only important factor determining the probability of thermal fragmentation of composite liquid particles. The second essential factor is the actual structure of the composite particle. It is shown that the probability of thermal fragmentation is also strongly dependent on the distance that each of the liquid droplets should travel to reach the fragmented state. 
In particular, if this distance is larger than the mean free path for the considered droplets in the air, the probability of thermal fragmentation should be negligible. It follows from this that fragmentation of the composite particle in the state with complete encapsulation is highly unlikely because of the larger distance that the two droplets must travel in order to separate. The analysis of composite liquid particles with the interfacial parameters that are expected in combustion aerosols demonstrates that thermal fragmentation of these particles may occur, and this mechanism may play a role in the evolution of combustion aerosols. Conditions for thermal fragmentation to play a significant role (for aerosol particles other than those from motor vehicle exhaust) are determined and examined theoretically. Conditions for spontaneous transformation between the states of composite particles with complete and partial encapsulation are also examined, demonstrating the possibility of such transformation in combustion aerosols. Indeed, it was shown for some typical components found in aerosols that transformation could take place on time scales of less than 20 s. The analysis showed that factors that influence surface and interfacial tension play an important role in this transformation process. It is suggested that such transformation may, for example, result in a delayed evaporation of composite particles with a significant water component, leading to observable effects in the evolution of combustion aerosols (including possible local humidity maxima near a source, such as a busy road). The obtained results will be important for further development and understanding of aerosol physics and technologies, including combustion aerosols and their evolution near a source.
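The energetic criterion described in this abstract, comparing the Gibbs surface-energy difference between the coagulated and fragmented states with the thermal energy kT, can be sketched for the completely encapsulated geometry. The droplet sizes and tension values below are illustrative assumptions, not values from the study; they simply show how the comparison is set up.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sphere_area(volume):
    """Surface area (m^2) of a sphere of the given volume (m^3)."""
    return (36.0 * math.pi) ** (1.0 / 3.0) * volume ** (2.0 / 3.0)

def delta_g_fragmentation(v1, v2, g1, g2, g12):
    """Gibbs surface-energy difference (J) between the fragmented state
    (two separate droplets, tensions g1 and g2 against air) and the
    completely encapsulated state (liquid 1 as a spherical core inside
    liquid 2, with interfacial tension g12)."""
    g_frag = g1 * sphere_area(v1) + g2 * sphere_area(v2)
    g_encap = g12 * sphere_area(v1) + g2 * sphere_area(v1 + v2)
    return g_frag - g_encap

# Illustrative case: two 50 nm radius droplets (e.g. water-like and oil-like)
r = 50e-9
v = 4.0 / 3.0 * math.pi * r ** 3
dg = delta_g_fragmentation(v, v, g1=0.072, g2=0.025, g12=0.050)

# Ratio to thermal energy at 300 K; a value >> 1 means thermal motion
# alone is very unlikely to drive fragmentation of this structure.
print(dg / (K_B * 300.0))
```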

Relevance:

10.00%

Publisher:

Abstract:

Aims: To describe a local data linkage project to match hospital data with the Australian Institute of Health and Welfare (AIHW) National Death Index (NDI) to assess long-term outcomes of intensive care unit patients. Methods: Data were obtained from hospital intensive care and cardiac surgery databases on all patients aged 18 years and over admitted to either of two intensive care units at a tertiary-referral hospital between 1 January 1994 and 31 December 2005. Date of death was obtained from the AIHW NDI by probabilistic software matching, in addition to manual checking through hospital databases and other sources. Survival was calculated from time of ICU admission, with a censoring date of 14 February 2007. Data for patients with multiple hospital admissions requiring intensive care were analysed only from the first admission. Summary and descriptive statistics were used for preliminary data analysis. Kaplan-Meier survival analysis was used to analyse factors determining long-term survival. Results: During the study period, 21 415 unique patients had 22 552 hospital admissions that included an ICU admission; 19 058 surgical procedures were performed with a total of 20 092 ICU admissions. There were 4936 deaths. Median follow-up was 6.2 years, totalling 134 203 patient-years. The casemix was predominantly cardiac surgery (80%), followed by cardiac medical (6%), and other medical (4%). The unadjusted survival at 1, 5 and 10 years was 97%, 84% and 70%, respectively. The 1-year survival ranged from 97% for cardiac surgery to 36% for cardiac arrest. An APACHE II score was available for 16 877 patients. In those discharged alive from hospital, the 1, 5 and 10-year survival varied with discharge location. Conclusions: ICU-based linkage projects are feasible to determine long-term outcomes of ICU patients.
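The Kaplan-Meier analysis used in this study can be illustrated with a minimal, self-contained estimator. The toy cohort below is hypothetical (a real analysis of this size would use a vetted statistics package); it shows how censored follow-up times enter the product-limit estimate.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up time per patient; events: 1 = death observed,
    0 = censored (alive at last follow-up).
    Returns a list of (event time, survival probability) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        removed = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed  # deaths and censorings both leave the risk set
        i += removed
    return curve

# Toy cohort: follow-up in years, with 0 marking censored observations
times  = [1, 2, 2, 3, 5, 6, 8, 10]
events = [1, 1, 0, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```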

Relevance:

10.00%

Publisher:

Abstract:

An Approach with Vertical Guidance (APV) is an instrument approach procedure which provides horizontal and vertical guidance to a pilot on approach to landing in reduced visibility conditions. APV approaches can greatly reduce the safety risk to general aviation by improving the pilot’s situational awareness. In particular, the incidence of Controlled Flight Into Terrain (CFIT), which has occurred in a number of fatal air crashes in general aviation over the past decade in Australia, can be reduced. APV approaches can also improve general aviation operations. If implemented at Australian airports, APV approach procedures are expected to bring a cost saving of millions of dollars to the economy due to fewer missed approaches, diversions and an increased safety benefit. The provision of accurate horizontal and vertical guidance is achievable using the Global Positioning System (GPS). Because aviation is a safety-of-life application, an aviation-certified GPS receiver must have integrity monitoring or augmentation to ensure that its navigation solution can be trusted. However, the difficulty of the current GPS satellite constellation alone meeting APV integrity requirements, the susceptibility of GPS to jamming or interference, and the potential shortcomings of proposed augmentation solutions for Australia, such as the Ground-based Regional Augmentation System (GRAS), justify the investigation of Aircraft Based Augmentation Systems (ABAS) as an alternative integrity solution for general aviation. ABAS augments GPS with other sensors at the aircraft to help it meet the integrity requirements. Typical ABAS designs assume high quality inertial sensors to provide an accurate reference trajectory for Kalman filters. Unfortunately, high-quality inertial sensors are too expensive for general aviation. 
In contrast to these approaches, the purpose of this research is to investigate fusing GPS with lower-cost Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMU) and a mathematical model of aircraft dynamics, referred to as an Aircraft Dynamic Model (ADM) in this thesis. Using a model of aircraft dynamics in navigation systems has been studied before in the available literature and shown to be useful, particularly for aiding inertial coasting or attitude determination. In contrast to these applications, this thesis investigates its use in ABAS. This thesis presents an ABAS architecture concept which makes use of a MEMS IMU and ADM, named the General Aviation GPS Integrity System (GAGIS) for convenience. GAGIS includes a GPS, MEMS IMU, ADM, a bank of Extended Kalman Filters (EKF) and uses the Normalized Solution Separation (NSS) method for fault detection. The GPS, IMU and ADM information is fused together in a tightly-coupled configuration, with frequent GPS updates applied to correct the IMU and ADM. The use of both IMU and ADM allows for a number of different possible configurations. Three are investigated in this thesis: a GPS-IMU EKF, a GPS-ADM EKF and a GPS-IMU-ADM EKF. The integrity monitoring performance of the GPS-IMU EKF, GPS-ADM EKF and GPS-IMU-ADM EKF architectures is compared against each other and against a stand-alone GPS architecture in a series of computer simulation tests of an APV approach. Typical GPS, IMU, ADM and environmental errors are simulated. The simulation results show the GPS integrity monitoring performance achievable by augmenting GPS with an ADM and low-cost IMU for a general aviation aircraft on an APV approach. A contribution to research is made in determining whether a low-cost IMU or ADM can provide improved integrity monitoring performance over stand-alone GPS. 
It is found that a reduction of approximately 50% in protection levels is possible using the GPS-IMU EKF or GPS-ADM EKF as well as faster detection of a slowly growing ramp fault on a GPS pseudorange measurement. A second contribution is made in determining how augmenting GPS with an ADM compares to using a low-cost IMU. By comparing the results for the GPS-ADM EKF against the GPS-IMU EKF it is found that protection levels for the GPS-ADM EKF were only approximately 2% higher. This indicates that the GPS-ADM EKF may potentially replace the GPS-IMU EKF for integrity monitoring should the IMU ever fail. In this way the ADM may contribute to the navigation system robustness and redundancy. To investigate this further, a third contribution is made in determining whether or not the ADM can function as an IMU replacement to improve navigation system redundancy by investigating the case of three IMU accelerometers failing. It is found that the failed IMU measurements may be supplemented by the ADM and adequate integrity monitoring performance achieved. Besides treating the IMU and ADM separately as in the GPS-IMU EKF and GPS-ADM EKF, a fourth contribution is made in investigating the possibility of fusing the IMU and ADM information together to achieve greater performance than either alone. This is investigated using the GPS-IMU-ADM EKF. It is found that the GPS-IMU-ADM EKF can achieve protection levels approximately 3% lower in the horizontal and 6% lower in the vertical than a GPS-IMU EKF. However this small improvement may not justify the complexity of fusing the IMU with an ADM in practical systems. Affordable ABAS in general aviation may enhance existing GPS-only fault detection solutions or help overcome any outages in augmentation systems such as the Ground-based Regional Augmentation System (GRAS). 
Countries such as Australia which currently do not have an augmentation solution for general aviation could especially benefit from the economic savings and safety benefits of satellite navigation-based APV approaches.
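The Normalized Solution Separation idea behind the fault detection described above can be sketched in one dimension: a full solution using all measurements is compared against subsolutions that each exclude one measurement, and each gap is normalized by its fault-free standard deviation. The scalar toy below, with hypothetical noise and fault magnitudes, only illustrates this detection principle, not the thesis's EKF-bank implementation.

```python
import math
import random

random.seed(42)

def nss_statistics(measurements, sigma):
    """Normalized solution separation for a scalar state estimated as the
    mean of redundant measurements (noise std dev sigma). Subsolution i
    excludes measurement i; the full-vs-sub gap is normalized by its
    standard deviation under the fault-free hypothesis."""
    n = len(measurements)
    full = sum(measurements) / n
    # Var(full - sub_i) = sigma^2 / (n * (n - 1)) for independent noise
    sd_sep = sigma / math.sqrt(n * (n - 1))
    stats = []
    for i in range(n):
        sub = (sum(measurements) - measurements[i]) / (n - 1)
        stats.append(abs(full - sub) / sd_sep)
    return stats

sigma = 1.0
clean = [random.gauss(0.0, sigma) for _ in range(6)]
faulty = clean[:]
faulty[0] += 12.0  # a ramp fault on channel 0 that has grown to 12-sigma

stats = nss_statistics(faulty, sigma)
# The faulted channel should produce by far the largest test statistic
print(max(range(6), key=lambda i: stats[i]))
```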

Relevance:

10.00%

Publisher:

Abstract:

We investigate the roles of firm- and country-level agency conflicts in determining corporate payout policies. Based on a large sample of 29,610 firms in 42 countries from 2001 to 2006, we show there is a form of "pecking order" in investors' ability to extract cash (whether as dividends only or share repurchases) from firms. Although investors are able to use their legal powers to extract cash from firms in high protection countries, their ability to do so can be substantially hindered when agency costs at the firm level are high. In poor protection countries, investors seem to take whatever cash they can get, even though the amount may be small, and with scant regard for investment opportunities and firm-level agency conflicts. Finally, compared to repurchases, we find dividends are more likely to be the sole method of payout in high protection countries and in non-insider-dominated firms.

Relevância:

10.00%

Publicador:

Resumo:

In a much anticipated judgment, the Federal Circuit has sought to clarify the standards applicable in determining whether a claimed method constitutes patent-eligible subject matter. In Bilski, the Federal Circuit identified a test to determine whether a patentee has made claims that pre-empt the use of a fundamental principle or an abstract idea or whether those claims cover only a particular application of a fundamental principle or abstract idea. It held that the sole test for determining subject matter eligibility for a claimed process under § 101 is that: (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing. The court termed this the “machine-or-transformation test.” In so doing, it overruled its earlier State Street decision to the extent that it deemed the “useful, tangible and concrete result” test inadequate to determine whether an alleged invention recites patent-eligible subject matter.

Relevância:

10.00%

Publicador:

Resumo:

The existence of any film genre depends on the effective operation of distribution networks. Contingencies of distribution play an important role in determining the content of individual texts and the characteristics of film genres; they enable new genres to emerge at the same time as they impose limits on generic change. This article sets out an alternative way of doing genre studies, based on an analysis of distributive circuits rather than film texts or generic categories. Our objective is to provide a conceptual framework that can account for the multiple ways in which distribution networks leave their traces on film texts and audience expectations, with specific reference to international horror networks, and to offer some preliminary suggestions as to how distribution analysis can be integrated into existing genre studies methodologies.

Relevância:

10.00%

Publicador:

Resumo:

We consider multi-robot systems that include sensor nodes and aerial or ground robots networked together. Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations. We present a sensor network deployment method using autonomous aerial vehicles, describe in detail the algorithms used for deployment and for measuring network connectivity, and provide experimental data collected from field trials. A particular focus is on determining gaps in the connectivity of the deployed network and generating a repair plan to complete the connectivity. This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth). © Springer-Verlag Berlin/Heidelberg 2006.
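The gap-detection step amounts to a connectivity analysis over the deployed nodes: nodes within radio range form a graph, and each disconnected component marks a gap that the repair plan must bridge. The sketch below is a hedged illustration of that idea (node positions, the radio range, and the function name are assumptions, not the paper's algorithms).

```python
# Hedged sketch: detecting connectivity gaps in a deployed sensor
# network by grouping nodes into radio-range connected components.
from math import dist

def components(nodes, radio_range):
    """Group node positions into connected components by radio range (DFS)."""
    seen, comps = set(), []
    for i in range(len(nodes)):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            j = stack.pop()
            if j in seen:
                continue
            seen.add(j)
            comp.append(j)
            stack.extend(k for k in range(len(nodes))
                         if k not in seen and dist(nodes[j], nodes[k]) <= radio_range)
        comps.append(comp)
    return comps

nodes = [(0, 0), (5, 0), (20, 0)]  # third node is out of radio range
comps = components(nodes, radio_range=8.0)
# Two components -> one gap; a repair pass would deploy a node between them.
```

With more than one component, a repair plan can be generated by choosing drop points between the nearest pair of nodes in different components, which is the flavour of the "second pass" described above.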

Relevância:

10.00%

Publicador:

Resumo:

This paper describes automation of the digging cycle of a mining rope shovel, covering autonomous dipper (bucket) filling and methods for detecting when to disengage the dipper from the bank. Novel techniques for overcoming dipper stall and for online estimation of dipper "fullness" are described, with in-field experimental results of laser DTM generation, machine automation and digging presented for a 1/7th scale model rope shovel. © 2006 Wiley Periodicals, Inc.
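The abstract does not give the stall-handling technique itself, so the following is a purely hypothetical illustration of how a dipper stall could be flagged from hoist-drive telemetry: a stall typically presents as sustained high motor torque with near-zero hoist speed. The function name, thresholds, and window length are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's method): flag a dipper stall when
# the hoist drive shows sustained high torque at near-zero speed.
def is_stalled(torque_history, speed_history,
               torque_limit=0.9, speed_floor=0.05, window=5):
    """True if the last `window` samples all show high torque and low speed.

    Torque and speed are assumed normalised to [0, 1] of rated values.
    """
    recent = list(zip(torque_history, speed_history))[-window:]
    return (len(recent) == window and
            all(t >= torque_limit and abs(v) <= speed_floor for t, v in recent))

# A detector like this would trigger a crowd retract or hoist relief
# action to free the dipper before the drive trips on overload.
```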

Relevância:

10.00%

Publicador:

Resumo:

We describe a sensor network deployment method using autonomous flying robots. Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations. We describe in detail the algorithms used for deployment and for measuring network connectivity, and provide experimental data we collected from field trials. A particular focus is on determining gaps in the connectivity of the deployed network and generating a plan for a second, repair pass to complete the connectivity. This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth).

Relevância:

10.00%

Publicador:

Resumo:

Existing literature has failed to find robust relationships between individual differences and the ability to fake psychological tests, possibly due to limitations in how successful faking is operationalised. In order to fake, individuals must alter their original profile to create a particular impression. Currently, successful faking is operationalised through statistical definitions, informant ratings, known-groups comparisons, the use of archival and baseline data, and breaches of validity indexes. However, there are many methodological limitations to these approaches. This research proposed a three-component model of successful faking to address this, where an original response is manipulated into a strategic response, which must match a criterion target. Further, by operationalising successful faking in this manner, this research takes into account the fact that individuals may have been successful in reaching their implicitly created profile, but that this may not have matched the criterion they were instructed to fake.

Participants (N = 48; 22 students and 26 non-students) completed the BDI-II honestly. Participants then faked the BDI-II as if they had no, mild, moderate and severe depression, as well as completing a checklist revealing which symptoms they thought indicated each level of depression. Findings were consistent with a three-component model of successful faking, where individuals effectively changed their profile to what they believed was required; however, this profile differed from the criterion defined by the psychometric norms of the test.

One of the foremost issues for research in this area is the inconsistent manner in which successful faking is operationalised. This research allowed successful faking to be operationalised in an objective, quantifiable manner. Using this model as a template may allow researchers a better understanding of the processes involved in faking, including the role of strategies and abilities in determining the outcome of test dissimulation.
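The criterion-target component can be made concrete using the BDI-II's published severity cut-offs (0-13 minimal, 14-19 mild, 20-28 moderate, 29-63 severe). The sketch below is an illustrative assumption about how such a check might be coded, not the study's actual scoring procedure.

```python
# Illustrative three-component check: a strategic (faked) response is
# "successful" only if it lands in the criterion band for the
# instructed severity level. Bands follow the published BDI-II cut-offs.
BDI_BANDS = {
    "none": range(0, 14),       # minimal: 0-13
    "mild": range(14, 20),      # 14-19
    "moderate": range(20, 29),  # 20-28
    "severe": range(29, 64),    # 29-63
}

def faking_successful(strategic_score, instructed_level):
    """Did the manipulated profile land in the criterion target band?"""
    return strategic_score in BDI_BANDS[instructed_level]
```

Scoring each faked profile this way yields the objective, quantifiable success criterion the abstract describes, rather than relying on informant ratings or validity-index breaches.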

Relevância:

10.00%

Publicador:

Resumo:

Powerful brands create meaningful images in the minds of customers (Keller, 1993). A strong brand image and reputation enhance differentiation and have a positive influence on buying behaviour (Gordon et al., 1993; McEnally and de Chernatony, 1999). While the power of branding is widely acknowledged in consumer markets, the nature and importance of branding in industrial markets remain under-researched. Many business-to-business (B2B) strategists have claimed brand-building belongs in the consumer realm. They argue that industrial products do not need branding, as it is confusing and adds little value to functional products (Collins, 1977; Lorge, 1998; Saunders and Watt, 1979). Others argue, however, that branding and the concept of brand equity are increasingly important in industrial markets, because it has been shown that what a brand means to a buyer can be a determining factor in deciding between industrial purchase alternatives (Aaker, 1991). In this context, it is critical for suppliers to initiate and sustain relationships due to the small number of potential customers (Ambler, 1995; Webster and Keller, 2004). To date, however, there is no model available to assist B2B marketers in identifying and measuring brand equity. In this paper, we take a step in that direction by operationalising and empirically testing a prominent brand equity model in a B2B context. This not only makes a theoretical contribution by advancing branding research, but also addresses a managerial need for information that will assist in the assessment of industrial branding efforts.

Relevância:

10.00%

Publicador:

Resumo:

In a critical review of the literature to assess the efficacy of monotherapy and subsequent combination anticonvulsant therapy in the treatment of neonatal seizures, four studies were examined: three randomised controlled trials and one retrospective cohort study. Each study used phenobarbital for monotherapy, with doses reaching a maximum of 40 mg/kg. Anticonvulsant drugs used in conjunction with phenobarbital for combination therapy included midazolam, clonazepam, lorazepam, phenytoin and lignocaine. Each study used an electroencephalograph for seizure diagnosis and neonatal monitoring when determining therapy efficacy and final outcome assessments. Collectively, the studies suggest neither monotherapy nor combination therapy is entirely effective in seizure control. Monotherapy demonstrated a 29-50% success rate for complete seizure control, whereas combination therapy administered after the failure of monotherapy demonstrated a success rate of 43-100%. When these trials were combined, the overall success rate for monotherapy was 44% (n = 34/78) and for combination therapy 72% (n = 56/78). Though the evidence was inconclusive, it would appear that combination therapy is of greater benefit to infants unresponsive to monotherapy. Further research, such as multi-site randomised controlled trials using standardised criteria and data collection, is required within this specialised area.
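The pooled percentages follow directly from the reported counts; a two-line check (using the counts exactly as quoted in the review):

```python
# Recomputing the pooled success rates from the reported counts.
mono_success, combo_success, total = 34, 56, 78
mono_rate = round(100 * mono_success / total)    # pooled monotherapy rate
combo_rate = round(100 * combo_success / total)  # pooled combination rate
```

This reproduces the 44% and 72% figures given above.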