941 results for "pay"
Abstract:
With the advancement of Service-Oriented Architecture in the technical and business domain, the management & engineering of services requires a thorough and systematic understanding of the service lifecycle for both business and software services. However, while service-oriented approaches acknowledge the importance of the service ecosystem, service lifecycle models are typically internally focused, paying limited attention to processes related to offering services to or using services from other actors. In this paper, we address this need by discussing the relations between a comprehensive service lifecycle approach for service management & engineering and the sourcing & purchasing of services. In particular, we pay attention to the similarities and differences between sourcing business and software services, the alignment between service management & engineering and sourcing & purchasing, the role of sourcing in the transformation of an organization towards a service-oriented paradigm, the role of architectural approaches to sourcing in this transformation, and the sourcing of specific services at different levels of granularity.
Abstract:
Police work tasks are diverse and require the ability to take command, demonstrate leadership, make serious decisions and be self-directed (Beck, 1999; Brunetto & Farr-Wharton, 2002; Howard, Donofrio & Boles, 2002). This work is usually performed in pairs or sometimes by an officer working alone. Operational police work is seldom performed under the watchful eyes of a supervisor, and a great amount of reliance is placed on the high levels of motivation and professionalism of individual officers. Research has shown that highly motivated workers produce better outcomes (Whisenand & Rush, 1998; Herzberg, 2003). It is therefore important that Queensland police officers are highly motivated to provide a quality service to the Queensland community. This research aims to identify factors which motivate Queensland police to perform quality work. Researchers acknowledge that there is a lack of research and knowledge in regard to the factors which motivate police (Beck, 1999; Bragg, 1998; Howard, Donofrio & Boles, 2002; McHugh & Verner, 1998). The motivational factors were identified in regard to the demographic variables of age, sex, rank, tenure and education. The model for this research is Herzberg’s two-factor theory of workplace motivation (1959). Herzberg found that there are two broad types of workplace motivational factors: those driven by a need to prevent loss or harm and those driven by a need to gain personal satisfaction or achievement. His study identified 16 basic sub-factors that operate in the workplace. The research utilised a questionnaire instrument based on the sub-factors identified by Herzberg (1959). The questionnaire consisted of an initial section which sought demographic information about the participant, followed by 51 Likert-scale questions. The instrument is an expanded version of an instrument previously used in doctoral studies to identify sources of police motivation (Holden, 1980; Chiou, 2004).
The questionnaire was forwarded to approximately 960 police in the Brisbane Metropolitan North Region. The data were analysed using factor analysis, MANOVAs, ANOVAs and multiple regression analysis to identify the key sources of police motivation and to determine the relationships between demographic variables (age, rank, educational level, tenure and generation cohort) and motivational factors. A total of 484 officers responded to the questionnaire from the sample population of 960. Factor analysis revealed five broad Prime Motivational Factors that motivate police in their work. The Prime Motivational Factors are: Feeling Valued, Achievement, Workplace Relationships, the Work Itself, and Pay and Conditions. The factor Feeling Valued highlighted the importance of positive, supportive leaders in motivating officers. Many officers commented that supervisors who only provided negative feedback diminished their sense of feeling valued and were a key source of de-motivation. Officers also frequently commented that they were motivated by operational police work itself, whilst demonstrating a strong sense of identity with their team and colleagues. The study showed a general need for acceptance by peers and an idealistic motivation to assist members of the community in need and to protect victims of crime. Generational cohorts were not found to exert a significant influence on police motivation. The demographic variable with the single greatest influence on police motivation was tenure. Motivation levels were found to drop dramatically during the first two years of an officer’s service and generally not to improve significantly until near retirement age. The findings of this research provide the foundation for a number of recommendations in regard to police retirement, training and work allocation that are aimed at improving police motivation levels.
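The factor-extraction step described above can be illustrated with a small sketch. This is not the study's analysis or data: the six items, two-factor structure, loadings and sample size below are all invented for illustration, and the Kaiser criterion (retain factors whose correlation-matrix eigenvalue exceeds 1) is just one common retention rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Likert-style responses: six items driven by two latent factors
# (all numbers are illustrative, not the study's data).
n_respondents = 400
factors = rng.normal(size=(n_respondents, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = factors @ loadings.T + 0.4 * rng.normal(size=(n_respondents, 6))

# Principal-factor style extraction: eigenvalues of the item correlation matrix.
R = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]

# Kaiser criterion: retain factors with eigenvalue > 1.
n_factors = int((eigenvalues > 1.0).sum())  # recovers the two latent factors here
```

A real analysis would follow this with rotation and interpretation of the loadings, which is how named factors such as "Feeling Valued" emerge.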
The model of five Prime Motivational Factors developed in this study is recommended for use as a planning tool by police leaders to improve the motivational and job-satisfaction components of police service policies. The findings of this study also provide a better understanding of the current sources of police motivation. They are expected to have valuable application for Queensland police human resource management when considering policies and procedures in the areas of motivation, stress reduction and attracting suitable staff to specific areas of responsibility.
Abstract:
To investigate the meaning and understanding of domestic food preparation within the lived experience of the household's main food preparer, this ethnographic study used a combination of qualitative and quantitative methodologies. Data were collected from three sources: the literature; an in-store survey of 251 food shoppers chosen at random while shopping during both peak and off-peak periods at metropolitan supermarkets; and semi-structured interviews with the principal food shopper and food preparer of 15 different Brisbane households. Male and female respondents, representing a cross-section of socio-economic groupings, ranged in age from 19 to 79 years and were all from English-speaking backgrounds. Changes in paid labour force participation, income and education have increased the value of the respondents' time, instigating massive changes in the way they shop, cook and eat. Much of their food preparation has moved from the domestic kitchen into the kitchens of other food establishments. For both sexes, the dominant motivating force behind these changes is a combination of their self-perceived lack of culinary skill, lack of enjoyment of cooking and lack of motivation to cook. The females in paid employment emphasise all factors, particularly the latter two, significantly more than the non-employed females. All factors are of increasing importance for individuals aged less than 35 years and, conversely, of significantly diminished importance to older respondents. Overall, it is the respondents aged less than 25 years who indicate the lowest cooking frequency and/or least cooking ability. Inherent in this latter group is an indifference to the art/practice of preparing food. Increasingly, all respondents want to do less cooking and/or get the cooking over with as quickly as possible. Convenience is a powerful lure by which to spend less time in the kitchen. As well, there is an apparent willingness to pay a premium for convenience.
Because children today are increasingly unlikely to be taught to cook, addressing the food skills deficit and encouraging individuals to cook for themselves are significant issues confronting health educators. These issues are suggested as appropriate subjects of future research.
Abstract:
Stream ciphers are encryption algorithms used for ensuring the privacy of digital telecommunications. They have been widely used for encrypting military communications, satellite communications and pay-TV, and for voice encryption of both fixed-line and wireless networks. The current multi-year European project eSTREAM, which aims to select stream ciphers suitable for widespread adoption, reflects the importance of this area of research. Stream ciphers consist of a keystream generator and an output function. Keystream generators produce a sequence that appears to be random, which is combined with the plaintext message using the output function. Most commonly, the output function is binary addition modulo two. Cryptanalysis of these ciphers focuses largely on analysis of the keystream generators and of relationships between the generator and the keystream it produces. Linear feedback shift registers (LFSRs) are widely used components in building keystream generators, as the sequences they produce are well understood. Many types of attack have been proposed for breaking various LFSR based stream ciphers. A recent attack type is known as an algebraic attack. Algebraic attacks transform the problem of recovering the key into the problem of solving a multivariate system of equations, the solution of which eventually recovers the internal state bits or the key bits. This type of attack has been shown to be effective on a number of regularly clocked LFSR based stream ciphers. In this thesis, algebraic attacks are extended to a number of well known stream ciphers where at least one LFSR in the system is irregularly clocked. Applying algebraic attacks to these ciphers has only been discussed previously in the open literature for LILI-128. In this thesis, algebraic attacks are first applied to keystream generators using stop-and-go clocking.
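The keystream-generator structure described here can be sketched in a few lines. The register length, tap positions and bit values below are arbitrary toy choices, not any cipher discussed in the thesis; the point is only that the keystream is combined with the plaintext by binary addition modulo two (XOR), so applying the same operation twice recovers the message.

```python
def lfsr(state, taps):
    """Toy Fibonacci LFSR: yields output bits. `state` is a list of 0/1 bits
    and `taps` are the register positions XORed to form the feedback bit."""
    state = list(state)
    while True:
        yield state[-1]
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]

def xor_with_keystream(bits, key_state, taps):
    # Output function: binary addition modulo two of data and keystream.
    ks = lfsr(key_state, taps)
    return [b ^ next(ks) for b in bits]

message = [1, 0, 1, 1, 0, 0, 1, 0]
key     = [1, 0, 0, 1]                 # initial register state (the secret key)
cipher  = xor_with_keystream(message, key, taps=[0, 3])
plain   = xor_with_keystream(cipher, key, taps=[0, 3])  # decryption = encryption
assert plain == message
```

An algebraic attack exploits the fact that each output bit of such a register is a known linear function of the initial state; the nonlinear components of a real cipher turn these relations into the multivariate equation systems discussed above.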
Four ciphers belonging to this group are investigated: the Beth-Piper stop-and-go generator, the alternating step generator, the Gollmann cascade generator and the eSTREAM candidate, the Pomaranch cipher. It is shown that algebraic attacks are very effective on the first three of these ciphers. Although no effective algebraic attack was found for Pomaranch, the algebraic analysis led to some interesting findings, including weaknesses that may be exploited in future attacks. Algebraic attacks are then applied to keystream generators using (p, q) clocking. Two well known examples of such ciphers, the step1/step2 generator and the self-decimated generator, are investigated. Algebraic attacks are shown to be very powerful in recovering the internal state of these generators. A more complex clocking mechanism than either stop-and-go or (p, q) clocking is mutual clock control. In mutual clock control generators, the LFSRs control the clocking of each other. Four well known stream ciphers belonging to this group are investigated with respect to algebraic attacks: the bilateral stop-and-go generator, the A5/1 stream cipher, the Alpha 1 stream cipher, and the more recent eSTREAM proposal, the MICKEY stream cipher. Some theoretical results with regard to the complexity of algebraic attacks on these ciphers are presented. The algebraic analysis of these ciphers showed that, generally, it is hard to generate the system of equations required for an algebraic attack on these ciphers. As the algebraic attack could not be applied directly to these ciphers, a different approach was used, namely guessing some bits of the internal state in order to reduce the degree of the equations. Finally, an algebraic attack on Alpha 1 that requires only 128 bits of keystream to recover the 128 internal state bits is presented. An essential process associated with stream cipher proposals is key initialization.
Many recently proposed stream ciphers use an algorithm to initialize the large internal state with a smaller key and possibly publicly known initialization vectors. The effect of key initialization on the performance of algebraic attacks is also investigated in this thesis. The relationship between the two has not been investigated before in the open literature. The investigation is conducted on Trivium and Grain-128, two eSTREAM ciphers. It is shown that the key initialization process has an effect on the success of algebraic attacks, unlike other conventional attacks. In particular, the key initialization process allows an attacker to first generate a small number of equations of low degree and then perform an algebraic attack using multiple keystreams. The effect of the number of iterations performed during key initialization is investigated. It is shown that both the number of iterations and the maximum number of initialization vectors to be used with one key should be carefully chosen. Some experimental results on Trivium and Grain-128 are then presented. Finally, the security with respect to algebraic attacks of the well known LILI family of stream ciphers, including the unbroken LILI-II, is investigated. These are irregularly clock-controlled nonlinear filtered generators. While the structure is defined for the LILI family, a particular parameter choice defines a specific instance. Two well known such instances are LILI-128 and LILI-II. The security of these and other instances is investigated to identify which instances are vulnerable to algebraic attacks. The feasibility of recovering the key bits using algebraic attacks is then investigated for both LILI-128 and LILI-II. Algebraic attacks which recover the internal state with less effort than exhaustive key search are possible for LILI-128 but not for LILI-II.
Given the internal state at some point in time, the feasibility of recovering the key bits is also investigated, showing that the parameters used in the key initialization process, if poorly chosen, can lead to a key recovery using algebraic attacks.
Abstract:
The topic of the present work is the relationship between the power of learning algorithms on the one hand, and the expressive power of the logical language used to represent the problems to be learned on the other. The central question is whether enriching the language results in more learning power. In order to make the question relevant and nontrivial, it is required that both texts (sequences of data) and hypotheses (guesses) be translatable from the “rich” language into the “poor” one. The issue is considered for several logical languages suitable for describing structures whose domain is the set of natural numbers. It is shown that enriching the language does not give any advantage for those languages which define a monadic second-order language that is decidable in the following sense: there is a fixed interpretation in the structure of natural numbers such that the set of sentences of this extended language true in that structure is decidable. But enriching the original language even by only one constant gives an advantage if this language contains a binary function symbol (which will be interpreted as addition). Furthermore, it is shown that behaviourally correct learning has exactly the same power as learning in the limit for those languages which define a monadic second-order language with the property given above, but has more power in the case of languages containing a binary function symbol. Adding the natural requirement that the set of all structures to be learned is recursively enumerable, it is shown that it pays off to enrich the language of arithmetic for both finite learning and learning in the limit, but it does not pay off to enrich the language for behaviourally correct learning.
Abstract:
This article reviews the international evidence on the impact of civil and criminal sanctions upon serious tax noncompliance by individuals. This construct lacks sharp definitional boundaries but includes large tax fraud and large-scale evasion that are not dealt with as fraud. Although substantial research and theory have been developed on general tax evasion and compliance, their conclusions might not apply to large-scale intentional fraudsters. No scientifically defensible studies have directly compared civil and criminal sanctions for tax fraud, although one U.S. study reported that significantly enhanced criminal sanctions have a greater effect than enhanced audit levels. Prosecution is public, whereas administrative penalties are confidential, and this fact encourages those caught to pay heavy penalties to avoid publicity, a criminal record, and imprisonment.
Abstract:
Introduction: The Australian Nurse Practitioner Project (AUSPRAC) was initiated to examine the introduction of nurse practitioners into the Australian health service environment. The nurse practitioner concept was introduced to Australia over two decades ago and has been evolving since. Today, however, the scope of practice, role and educational preparation of nurse practitioners are well defined (Gardner et al, 2006). Amendments to specific pre-existing legislation at a State level have permitted nurse practitioners to perform additional activities, including some once in the domain of the medical profession. In the Australian Capital Territory, for example, 13 diverse Acts and Regulations required amendments and three new Acts were established (ACT Health, 2006). Nurse practitioners are now legally authorized to diagnose, treat, refer and prescribe medications in all Australian states and territories. These extended practices differentiate nurse practitioners from other advanced practice roles in nursing (Gardner, Chang & Duffield, 2007). There are, however, obstacles for nurse practitioners wishing to use these extended practices. Restricted access to Medicare funding via the Medicare Benefit Scheme (MBS) and the Pharmaceutical Benefit Scheme (PBS) limits the scope of nurse practitioner service in the private health sector and community settings. A recent survey of Australian nurse practitioners (n=202) found that two-thirds of respondents (66%) stated that lack of legislative support limited their practice. Specifically, 78% stated that lack of a Medicare provider number was ‘extremely limiting’ to their practice and 71% stated that no access to the PBS was ‘extremely limiting’ to their practice (Gardner et al, in press).
Changes to Commonwealth legislation are needed to enable nurse practitioners to prescribe medication so that patients have access to PBS subsidies where they exist; currently, patients whose prescriptions originate from nurse practitioners must pay in full when these prescriptions are filled outside public hospitals. This report presents findings from a sub-study of Phase Two of AUSPRAC. Phase Two was designed to enable investigation of the processes and activities of nurse practitioner service. Process measurements of nurse practitioner services are valuable to healthcare organisations and service providers (Middleton, 2007). Processes of practice can be evaluated through clinical audit; however, as Middleton cautions, no direct relationship between these processes and patient outcomes can be assumed.
Abstract:
This paper describes a thorough thermal study of a fleet of DC traction motors which were found to suffer from overheating after 3 years of full operation. Overheating of these traction motors is attributed partly to the higher than expected number of starts and stops between train terminals. Another probable cause of overheating is the design of the traction motor and/or its control strategy. According to the motor manufacturer, a current shunt is permanently connected across the motor field winding. Hence, some of the armature current is bypassed into the current shunt. The motor then runs above its rated speed in the field weakening mode. In this study, a finite difference model has been developed to simulate the temperature profile at different parts inside the traction motor. In order to validate the simulation results, an empty vehicle loaded with drums of water was used to simulate the full payload of a light rail vehicle experimentally. The authors report that the simulation results agree reasonably well with experimental data, and that the armature of the traction motor is likely to run cooler if its field shunt is disconnected at low speeds.
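A finite difference thermal model of the kind mentioned above can be sketched in one dimension. This is not the authors' model: the geometry, diffusivity, time step and temperatures below are invented, and a real motor model would be multi-dimensional with internal heat generation and convective boundaries. The sketch only shows the explicit update step that propagates heat between neighbouring nodes.

```python
def diffuse_step(temps, alpha, dt, dx):
    """One explicit finite-difference (FTCS) step of the 1-D heat equation.
    Boundary nodes are held at a fixed temperature; the mesh ratio
    r = alpha*dt/dx**2 must be <= 0.5 for the scheme to be stable."""
    r = alpha * dt / dx ** 2
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + r * (temps[i + 1] - 2 * temps[i] + temps[i - 1])
    return new

# Illustrative rod of 21 nodes: a hot node (e.g. a winding) between cooler ends.
temps = [40.0] * 21
temps[10] = 120.0
for _ in range(200):
    temps = diffuse_step(temps, alpha=1e-5, dt=0.1, dx=0.01)
# The hot spot spreads out: the peak drops while staying above ambient.
assert 40.0 < max(temps) < 120.0
```

Repeating such steps until the profile settles gives the steady-state temperature distribution that is compared against measurement in validation studies like the one described.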
Abstract:
Principal Topic: Venture ideas are at the heart of entrepreneurship (Davidsson, 2004). However, we are yet to learn what factors drive entrepreneurs’ perceptions of the attractiveness of venture ideas, and what the relative importance of these factors is for their decision to pursue an idea. The expected financial gain is one factor that will obviously influence the perceived attractiveness of a venture idea (Shepherd & DeTienne, 2005). In addition, the degree of novelty of venture ideas along one or more dimensions, such as new products/services, new methods of production, entry into new markets/customer segments and new methods of promotion, may affect their attractiveness (Schumpeter, 1934). Further, according to the notion of an individual-opportunity nexus, venture ideas are closely associated with certain individual characteristics (relatedness). Shane (2000) empirically identified that an individual’s prior knowledge is closely associated with the recognition of venture ideas. Sarasvathy’s (2001; 2008) effectuation theory proposes a high degree of relatedness between venture ideas and the resource position of the individual. This study examines how entrepreneurs weigh considerations of different forms of novelty and relatedness, as well as potential financial gain, in assessing the attractiveness of venture ideas. Method: I use conjoint analysis to determine how expert entrepreneurs develop preferences for venture ideas involving different degrees of novelty, relatedness and potential gain. The conjoint analysis estimates respondents’ preferences in terms of utilities (or part-worths) for each level of novelty, relatedness and potential gain of venture ideas. A sample of 32 expert entrepreneurs who had been awarded young entrepreneurship awards was selected for the study. Each respondent was interviewed and presented with 32 scenarios, each explicating a different combination of these attributes, for consideration.
Results and Implications: Results indicate that while the respondents do not prefer mere imitation, they derive higher utility from low to medium degrees of newness, suggesting that high degrees of newness are fraught with greater risk and/or greater resource needs. Respondents place considerable weight on alignment with the knowledge and skills they already possess in choosing a particular venture idea. The initial resource position of entrepreneurs is not equally important. Even though expected potential financial gain yields substantial utility, results indicate that it is not a dominant factor in the attractiveness of a venture idea.
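The part-worth estimation at the core of conjoint analysis reduces to a dummy-coded regression of profile ratings on attribute levels. The two attributes, levels and ratings below are invented placeholders (the study used novelty, relatedness and financial gain, each at several levels); the sketch only shows how per-level utilities are recovered by least squares.

```python
import numpy as np

# Dummy-coded profiles: columns are [intercept, novelty=high, relatedness=high].
# Ratings are one hypothetical respondent's scores for the four profiles.
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
ratings = np.array([3.0, 4.0, 6.0, 7.0])

# Least-squares fit: the coefficients are the part-worth utilities.
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
# partworths[1] is the utility of high novelty, partworths[2] of high
# relatedness; with these toy ratings, relatedness contributes more.
```

Comparing the magnitudes of the estimated part-worths across attributes is what supports conclusions like "relatedness outweighs financial gain" in the study above.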
Abstract:
AC motors are used in a wide range of modern systems, from household appliances to automated industry applications such as ventilation systems, fans, pumps, conveyors and machine tool drives. Inverters are widely used in industrial and commercial applications due to the growing need for speed control in ASD systems. Fast switching transients and the common mode voltage, in interaction with parasitic capacitive couplings, may cause many unwanted problems in ASD applications, including shaft voltage and leakage currents. One of the inherent characteristics of Pulse Width Modulation (PWM) techniques is the generation of the common mode voltage, which is defined as the voltage between the electrical neutral of the inverter output and the ground. Shaft voltage can cause bearing currents when it exceeds the breakdown voltage level of the thin lubricant film between the inner and outer rings of the bearing. This phenomenon is the main reason for early bearing failures. Rapid development in power switch technology has led to a drastic reduction in switching rise and fall times. Because there is considerable capacitance between the stator windings and the frame, there can be a significant capacitive current (ground current escaping to earth through stray capacitors inside a motor) if the common mode voltage has high frequency components. This current leads to noise and Electromagnetic Interference (EMI) issues in motor drive systems. These problems have been dealt with using a variety of methods which have been reported in the literature. However, cost and maintenance issues have prevented these methods from being widely accepted. Extra cost or rating of the inverter switches is usually the price to pay for such approaches. Thus, the determination of cost-effective techniques for shaft and common mode voltage reduction in ASD systems, with the focus on the first step of the design process, is the targeted scope of this thesis.
An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. Electrical power generation from renewable energy sources, such as wind energy systems, has become a crucial issue because of environmental problems and a predicted future shortage of traditional energy sources. Thus, Chapter 2 focuses on the shaft voltage analysis of stator-fed induction generators (IGs) and Doubly Fed Induction Generators (DFIGs) in wind turbine applications. This shaft voltage analysis includes: topologies, high frequency modelling, calculation and mitigation techniques. A back-to-back AC-DC-AC converter is investigated in terms of shaft voltage generation in a DFIG. Different topologies of LC filter placement are analysed in an effort to eliminate the shaft voltage. Different capacitive couplings exist in the motor/generator structure, and any change in design parameters affects these capacitive couplings. Thus, an appropriate design for AC motors should lead to the smallest possible shaft voltage. Calculation of the shaft voltage based on different capacitive couplings, and an investigation of the effects of different design parameters, are discussed in Chapter 3. This is achieved through 2-D and 3-D finite element simulation and experimental analysis. End-winding parameters of the motor are also effective factors in the calculation of the shaft voltage and have not been taken into account in previously reported studies. Calculation of the end-winding capacitances is rather complex because of the diversity of end-winding shapes and the complexity of their geometry. A comprehensive analysis of these capacitances has been carried out with 3-D finite element simulations and experimental studies to determine their effective design parameters. These are documented in Chapter 4.
Results of this analysis show that, by choosing appropriate design parameters, it is possible to decrease the shaft voltage and resultant bearing current at the primary stage of generator/motor design without using any additional active or passive filter-based techniques. The common mode voltage is determined by the switching pattern and, by using an appropriate pattern, the common mode voltage level can be controlled. Therefore, any PWM pattern which eliminates or minimizes the common mode voltage will be an effective shaft voltage reduction technique. Thus, common mode voltage reduction of a three-phase AC motor supplied with a single-phase diode rectifier is the focus of Chapter 5. The proposed strategy is mainly based on proper utilization of the zero vectors. Multilevel inverters are also used in ASD systems; they have more voltage levels and switching states, and can provide more possibilities to reduce the common mode voltage. The common mode voltage of multilevel inverters is investigated in Chapter 6. Chapter 7 investigates techniques for eliminating the shaft voltage in a DFIG, based on the methods presented in the literature, by the use of simulation results. It is shown that every solution for reducing the shaft voltage in DFIG systems has its own characteristics, and these have to be taken into account in determining the most effective strategy. Calculation of the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds, in symmetrical and asymmetrical shaft and ball positions, is discussed in Chapter 8. The analysis is carried out using finite element simulations to determine the conditions which will increase the probability of high rates of bearing failure due to current discharges through the balls and races.
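The common mode voltage that drives these effects is simply the average of the three inverter leg voltages, so it can be tabulated per switching state. The sketch below assumes an idealised two-level inverter with a normalised DC link (not any converter studied in the thesis); it shows why the zero vectors (000 and 111) produce the largest common mode magnitude, which is why zero-vector utilisation is the natural lever for the PWM strategies discussed above.

```python
def common_mode_voltage(switch_state, vdc=1.0):
    """CMV of an idealised two-level three-phase inverter: the mean of the
    three leg voltages, each leg at +Vdc/2 (switch on) or -Vdc/2 (switch off)."""
    legs = [vdc / 2 if s else -vdc / 2 for s in switch_state]
    return sum(legs) / 3

# Zero vectors give |Vdc/2|; active vectors only |Vdc/6|.
assert common_mode_voltage((0, 0, 0)) == -0.5
assert common_mode_voltage((1, 1, 1)) == 0.5
assert abs(common_mode_voltage((1, 0, 0))) < 0.5
```

A modulation scheme that replaces zero vectors with opposing active vectors therefore caps the common mode voltage at one third of its worst-case value, at the cost of some freedom in shaping the output.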
Abstract:
INTRODUCTION: Since the introduction of its QUT ePrints institutional repository of published research outputs, together with the world’s first mandate for author contributions to an institutional repository, Queensland University of Technology (QUT) has been a leader in support of green road open access. With QUT ePrints providing our mechanism for supporting the green road to open access, QUT has since also continued to expand its secondary open access strategy supporting gold road open access, which is likewise designed to assist QUT researchers to maximise the accessibility, and so the impact, of their research. ---------- METHODS: QUT Library has adopted the position of selectively supporting true gold road open access publishing by using the Library Resource Allocation budget to pay the author publication fees for QUT authors wishing to publish in the open access journals of a range of publishers, including BioMed Central, Public Library of Science and Hindawi. QUT Library has been careful to support only true open access publishers and not those open access publishers with hybrid models which “double dip” by charging authors publication fees and libraries subscription fees for the same journal content. QUT Library has maintained a watch on the growing number of open access journals available from gold road open access publishers and their increasing rate of success as measured by publication impact. ---------- RESULTS: This paper reports on the successes and challenges of QUT’s efforts to support true gold road open access publishers and promote these publishing strategy options to researchers at QUT. The number and spread of QUT papers submitted and published in the journals of each publisher is provided.
Citation counts for papers and authors are also presented and analysed, with the intention of identifying the benefits to accessibility and research impact for early career and established researchers.---------- CONCLUSIONS: QUT Library is eager to continue and further develop support for this publishing strategy, and makes a number of recommendations to other research institutions, on how they can best achieve success with this strategy.
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows the users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? how can the semantic unit be linked to high-level image knowledge? how can the contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still worth investigating. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction, as it captures the common visual property of objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
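As a rough illustration of phase 4, the scene configuration can be read as a naive factorisation P(scene | objects) ∝ P(scene) · Π P(object | scene). The sketch below uses this simplification; all scene labels, object labels, and probabilities are invented for illustration, not taken from the thesis:

```python
import math

# Hypothetical scene configuration: P(object | scene) co-occurrence
# probabilities, which in practice would be estimated from a set of
# fully segmented and annotated training images.
P_OBJECT_GIVEN_SCENE = {
    "beach":     {"sky": 0.9, "sand": 0.8, "water": 0.7, "building": 0.1},
    "cityscape": {"sky": 0.8, "sand": 0.05, "water": 0.2, "building": 0.9},
}
P_SCENE = {"beach": 0.5, "cityscape": 0.5}  # uniform scene prior

def infer_scene(objects):
    """Return the most probable scene type given mid-level object labels,
    using a naive Bayes factorisation of the scene configuration."""
    scores = {}
    for scene, prior in P_SCENE.items():
        log_p = math.log(prior)
        for obj in objects:
            # Smooth objects unseen for this scene with a small probability.
            log_p += math.log(P_OBJECT_GIVEN_SCENE[scene].get(obj, 1e-3))
        scores[scene] = log_p
    return max(scores, key=scores.get)

print(infer_scene(["sky", "sand", "water"]))  # beach
print(infer_scene(["sky", "building"]))       # cityscape
```

A full probabilistic graph model would also encode dependencies between objects; the independence assumption here is only a minimal stand-in for that inference step.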
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
Abstract:
During the spring of 1987, 1,215 samples of spring oats (Avena sativa L.) were collected in Madison, Champaign, Woodford, Warren, and DeKalb counties, Illinois. At each site on each of three sampling dates, 45 samples were collected (regardless of symptoms) in a W pattern in 1 ha and tested for the PAV, MAV, RPV, and RMV serotypes of barley yellow dwarf virus (BYDV) by direct double-antibody sandwich enzyme-linked immunosorbent assay (ELISA). RMV was not detected at any location. PAV and RPV were detected at all locations, as early as 17 April in Champaign County. The incidences of PAV and RPV from all plants sampled ranged from 2 to 64% and from 2 to 88%, respectively. The highest incidences of both strains were in May samples from Woodford County. MAV was detected in lower incidences (2-16%) only in samples from the central region of the state (Champaign, Woodford, and Warren counties). The presence of MAV serotypes was confirmed in triple-antibody sandwich ELISA with the MAV-specific MAFF2 monoclonal antibody from L. Torrance. In the last previous survey for BYDV in Illinois, during 1967-1968 (1), about 75% of the isolates were PAV and about 20% were RPV; single isolates of RMV and MAV were found. Twenty years later, 55% were PAV, 39% were RPV, and 6% were MAV.
Abstract:
A review of the literature related to issues involved in irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD are not fully analysed; (2) the impacts of excessive pesticide use on farmers' health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues using primary data, along with secondary data, from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in consumer surplus is Rs. 48,236 million, while the change in producer surplus is Rs. 14,274 million, between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector's ability to compete with the modern sector depends on productivity improvements, reduced production costs and future structural changes (spillover effects). Second, the thesis findings on pesticides used in agriculture show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. It is shown that the average loss in earnings per farmer for the 'hospitalised' sample is Rs. 475 per month, while it is approximately Rs. 345 per month for the 'general' farmers group during a typical cultivation season. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the 'hospitalised' and 'general' farmers' samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure and disutility are 29, 50, 5 and 16 per cent respectively for 'hospitalised' farmers, and 32, 55, 8 and 5 per cent respectively for 'general' farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers are overusing pesticides in the expectation of higher future returns. This has led to increased inefficiency in farming practices, which is not recognised by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction levels, due to a failure to incorporate all production costs in the relevant models. Only by incorporating water-quality changes alongside quantity depletion is it possible to derive socially optimal levels. Empirical results clearly show that the benefit per hectare per month of avoiding both the cost of deepening agro-wells by five feet from the existing average and the cost of maintaining the water salinity level at 1.8 mmhos/cm is approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.
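As a back-of-the-envelope check on the reported WTP decomposition, the component values implied by each group's total WTP and percentage shares can be computed directly; the figures are from the abstract, while the function name and rupee rounding are my own:

```python
def decompose_wtp(total_wtp, shares):
    """Split a total willingness to pay (Rs. per month) into component
    values from its percentage contributions."""
    return {component: total_wtp * pct / 100 for component, pct in shares.items()}

# Percentage contributions reported for each farmer group.
hospitalised = decompose_wtp(950, {"health costs": 29, "lost earnings": 50,
                                   "mitigating expenditure": 5, "disutility": 16})
general = decompose_wtp(620, {"health costs": 32, "lost earnings": 55,
                              "mitigating expenditure": 8, "disutility": 5})

# The implied lost-earnings components line up with the separately
# reported averages (Rs. 475 and approximately Rs. 345 per month).
print(hospitalised["lost earnings"])  # 475.0
print(general["lost earnings"])       # 341.0
```

The close match between the share-implied values and the independently reported earnings losses is consistent with the abstract's figures.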
Abstract:
This article looks at a Chinese Web 2.0 original literature site, Qidian, in order to show the coevolution of market and non-market initiatives. The analytic framework of social network markets (Potts et al., 2008) is employed to analyse the motivations of publishing original literature works online and to understand the support mechanisms of the site, which encourage readers’ willingness to pay for user-generated content. The co-existence of socio-cultural and commercial economies and their impact on the successful business model of the site are illustrated in this case. This article extends the concept of social network markets by proposing the existence of a ripple effect of social network markets through convergence between PC and mobile internet, traditional and internet publishing, and between publishing and other cultural industries. It also examines the side effects of social network markets, and the role of market and non-market strategies in addressing the issues.