904 results for GST and incapacitated entities


Relevance: 30.00%

Abstract:

This paper discusses the use of comparative performance measurement by means of Data Envelopment Analysis in the context of the regulation of English and Welsh water companies. Specifically, it discusses the use of Data Envelopment Analysis to estimate potential cost savings in sewerage, as this analysis fed into the 1994 price review carried out by the water industry regulator. The application is used as a vehicle for highlighting generic issues: assessing the impact of factors on the performance ranking of units, the insights gained from using alternative methods to assess comparative performance, and the problem of assessing comparative performance when only a few, highly complex entities are involved. The paper should prove of interest to those concerned with regulation and, more generally, with methods of comparative performance measurement.
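For readers unfamiliar with the technique, here is a minimal sketch of an input-oriented, constant-returns-to-scale DEA model solved as a linear programme. This is the textbook CCR formulation, not the specific model used in the price review, and the toy data are invented for illustration.

```python
# Minimal input-oriented CCR DEA sketch (illustrative, not the paper's model).
# For each unit, solve: min theta  s.t.  sum_j lam_j * x_j <= theta * x_0,
#                                        sum_j lam_j * y_j >= y_0,  lam >= 0.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency of one unit. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lam_1..lam_n]; minimise theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints: sum_j lam_j x_jk - theta * x_unit,k <= 0
    A_in = np.hstack([-X[unit].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lam_j y_jr <= -y_unit,r
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[unit]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun  # theta = 1 means efficient; theta < 1 means potential input saving

# Toy data: 4 units, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(dea_efficiency(X, Y, i), 3) for i in range(4)])
```

A score below 1 is read as the radial input contraction the unit could achieve if it performed as well as the best observed comparators.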

Relevance: 30.00%

Abstract:

Observations are often nested within other units. This is particularly the case in the education sector, where school performance in terms of value added is the result of the school's contribution as well as of pupil academic ability and other pupil-level characteristics. Traditionally, the literature uses parametric Multi-Level Models (i.e. models that assume a priori a particular functional form for the production process) to estimate the performance of nested entities. This paper discusses the use of the non-parametric Free Disposal Hull model (i.e. one that makes no a priori assumptions about the production process) as an alternative approach. While taking into account contextual characteristics as well as atypical observations, we show how to decompose non-parametrically the overall inefficiency of a pupil into a unit-specific and a higher-level (i.e. school) component. Using a sample of entry and exit attainments of 3,017 girls in British ordinary single-sex schools, we test the robustness of the non-parametric and parametric estimates. We find that the two methods agree on the relative measures of the scope for potential attainment improvement. Further, the two methods agree on the variation in pupil attainment and the proportions attributable to the pupil and school levels.
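By way of illustration of the Free Disposal Hull idea, the sketch below computes an input-oriented FDH efficiency score under the standard free-disposability assumption. The toy data are invented, and the pupil/school decomposition described in the paper is not reproduced here.

```python
# Minimal input-oriented Free Disposal Hull (FDH) sketch (illustrative).
# A unit is compared only against observed units that dominate it on outputs;
# no convex combinations are formed, unlike DEA.
import numpy as np

def fdh_efficiency(X, Y, unit):
    """X: (n, m) inputs, Y: (n, s) outputs. Returns theta <= 1 (1 = efficient)."""
    dominators = np.all(Y >= Y[unit], axis=1)   # units producing at least unit's outputs
    ratios = X[dominators] / X[unit]            # input ratios against each dominator
    return np.min(np.max(ratios, axis=1))       # best achievable radial input contraction

# Toy data: 4 units, 2 inputs, 1 output.
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(fdh_efficiency(X, Y, i), 3) for i in range(4)])
```

Because FDH benchmarks only against actually observed units, it is less demanding than DEA and more robust to assumptions about the production set, which is the property the paper exploits.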

Relevance: 30.00%

Abstract:

Principal components analysis/factor analysis (PCA/FA) is a method of analyzing complex data sets in which there are no clearly defined X or Y variables. It has multiple uses, including the study of the pattern of variation between individual entities, such as patients with particular disorders, and the detailed study of descriptive variables. In most applications, variables are related to a smaller number of ‘factors’ or PCs that account for the maximum variance in the data and hence may explain important trends among the variables. An increasingly important application of the method is in the ‘validation’ of questionnaires that attempt to relate subjective aspects of a patient's experience to more objective measures of vision.
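As a minimal illustration of the dimension-reduction step described above (a generic sketch, not the authors' analysis), the following relates eight correlated 'questionnaire items' to two principal components using scikit-learn; the data are simulated.

```python
# Minimal PCA sketch: relate many observed variables to a few components
# that account for the maximum variance (illustrative, simulated data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 100 'patients' x 8 correlated questionnaire items driven by two latent traits.
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 8))
data = latent @ loadings + 0.3 * rng.normal(size=(100, 8))

pca = PCA(n_components=2)
scores = pca.fit_transform(data)       # each patient's position on the PCs
print(pca.explained_variance_ratio_)   # share of variance accounted for by each PC
print(pca.components_.round(2))        # loadings of the items on the PCs
```

Items loading heavily on the same component are candidates for a common underlying 'factor', which is the logic behind questionnaire validation by PCA/FA.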

Relevance: 30.00%

Abstract:

Attitudes towards the environment fall into two broad categories: anthropocentric and ecocentric. The former regards nature as of value only insofar as it is useful to humanity, whereas the latter assigns intrinsic value to natural entities. Industrial society can be characterised as dominated by anthropocentrism, which leads to the assumption that a majority of people hold anthropocentric values. However, research shows that the most widely held values are ecocentric, which implies that many people's actions are at variance with their values. Furthermore, policy relating to environmental issues is predominantly anthropocentric, which implies that it fails to take account of the values of the majority. Research among experts involved in policy formulation has shown that their values, often ecocentric, are excluded from the policy process. The genetic modification of food can be categorised as anthropocentric, which implies that the technique is in conflict with widely held ecocentric values. This thesis examines data collected from interviews with individuals who have an influence on the debate surrounding the introduction of genetically modified foods and can be considered 'experts'. Each interviewee is categorised according to whether their values and actions are ecocentric or anthropocentric, and the linkages between the two, and the arguments used to justify their positions, are explored. Particular emphasis is placed on interviewees who hold ecocentric values but act professionally in an anthropocentric way. Finally, common themes are drawn out, and the features that the interviewees' arguments share are outlined.

Relevance: 30.00%

Abstract:

It has been recognised for some time that a full code of amino acid-based recognition of DNA sequences would be useful. Several approaches, which utilise small DNA-binding motifs called zinc fingers, are presently employed. None of the current approaches successfully combines a combinatorial approach to the elucidation of a code with a single-stage, high-throughput screening assay. The work outlined here describes the development of a model system for the study of DNA-protein interactions and of a high-throughput assay for the detection of such interactions. A zinc finger protein was designed to bind with high affinity and specificity to a known DNA sequence. In future work, the region of the zinc finger responsible for the specificity of binding can be mutated in order to observe the effect on the DNA-protein interactions. The zinc finger protein was initially synthesised as a His-tagged product. It was not possible, however, to develop a high-throughput assay using the His-tagged zinc finger protein. The gene encoding the zinc finger protein was therefore altered and the protein synthesised as a Glutathione S-Transferase (GST) fusion product. A successful assay was developed using the GST fusion protein and Scintillation Proximity Assay technology (Amersham Pharmacia Biotech). The scintillation proximity assay is a dynamic assay that allows DNA-protein interactions to be studied in "real time". The assay not only provides a high-throughput method of screening zinc finger proteins against potential ligands but also allows the effect of adding reagents or competitor ligands to be monitored.

Relevance: 30.00%

Abstract:

A cell culture model of the gastric epithelial cell surface would prove useful for biopharmaceutical screening of new chemical entities and dosage forms. A successful model should exhibit tight junction formation and maintenance of differentiation and polarity. Conditions were established for primary culture of guinea-pig gastric mucous epithelial cell monolayers on tissue culture plastic (TCP) and membrane inserts (Transwells). Tight junction formation for cells grown on Transwells for three days was assessed by measurement of transepithelial electrical resistance (TEER) and the permeability of mannitol and fluorescein. Coating the polycarbonate filter with collagen IV, rather than with collagen I, enhanced tight junction formation. TEER for cells grown on Transwells coated with collagen IV was close to that obtained with intact guinea-pig gastric epithelium in vitro. Differentiation was assessed by incorporation of [3H]glucosamine into glycoprotein and by the activity of NADPH oxidase, which produces superoxide. Both of these measures were greater for cells grown on filters coated with collagen I than for cells grown on TCP, but no major difference was found between cells grown on collagens I and IV. However, monolayers grown on membranes coated with collagen IV exhibited apically polarized secretion of mucin and superoxide. The proportion of cells that stained positively for mucin with periodic acid-Schiff reagent was greater than 95% for all culture conditions. Gastric epithelial monolayers grown on Transwells coated with collagen IV were able to withstand transient (30 min) apical acidification to pH 3, which was associated with a decrease in [3H]mannitol flux and an increase in TEER relative to pH 7.4. The model was used to provide the first direct demonstration that an NSAID (indomethacin) accumulates in gastric epithelial cells exposed to low apical pH. In conclusion, guinea-pig gastric epithelial cells cultured on collagen IV represent a promising model of the gastric surface epithelium suitable for screening procedures.

Relevance: 30.00%

Abstract:

The present thesis focuses on the overall structure of the language of two types of Speech Exchange Systems (SES): Interview (INT) and Conversation (CON). The linguistic structure of INT and CON is quantitatively investigated on three different but interrelated levels of analysis: lexis, syntax and information structure. The corpus of data investigated for the project consists of eight sessions of pairs of conversants in carefully planned interviews, followed by unplanned, surreptitiously recorded conversational encounters of the same pairs of speakers. The data comprise a total of approximately 15,200 words of INT talk and about 19,200 words of CON talk. Taking account of the debatable assumption that the language of SES might be complex on certain linguistic levels (e.g. syntax) (Halliday 1979) and simple on others (e.g. lexis) in comparison with written discourse, the thesis sets out to investigate this complexity using a statistical approach to the computation of the structures recurrent in the language of INT and CON. The findings clearly indicate the presence of linguistic complexity in both types. They also show the language of INT to be slightly more syntactically and lexically complex than that of CON. Lexical density seems to be relatively high in both types of spoken discourse. The language of INT also seems to be more complex than that of CON on the level of information structure; this is manifested in the greater use of Inferable and other linguistically complex entities of discourse. Halliday's suggestion that the language of SES is syntactically complex is confirmed, but not the suggestion that the more casual the conversation, the more syntactically complex it becomes. The results of the analysis point to the general conclusion that the linguistic complexity of types of SES lies not only in the high recurrence of syntactic structures, but also in the combination of these features with each other and with other linguistic and extralinguistic features. The linguistic analysis of the language of SES can be useful in understanding and pinpointing the intricacies of spoken discourse in general and will help discourse analysts and applied linguists in exploiting it for both theoretical and pedagogical purposes.
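As a small illustration of one measure mentioned above, here is a minimal lexical density sketch (content words as a share of all words); the tokenisation and the stop-word list are crude simplifying assumptions, not the thesis's actual procedure.

```python
# Minimal lexical density sketch: content words / total words (illustrative).
# The crude stop-word set stands in for a proper inventory of grammatical words.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "in", "on", "at", "to",
    "is", "are", "was", "were", "be", "been", "it", "that", "this", "i",
    "you", "he", "she", "we", "they", "not", "do", "does", "did", "have",
}

def lexical_density(text: str) -> float:
    tokens = [t.strip(".,;:!?\"'").lower() for t in text.split()]
    tokens = [t for t in tokens if t]
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens)

print(round(lexical_density("The results of the analysis point to a general conclusion."), 2))
```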

Relevance: 30.00%

Abstract:

The traditional method of classifying neurodegenerative diseases is based on the original clinico-pathological concept, supported by 'consensus' criteria and by data from molecular pathological studies. This review discusses, first, current problems in classification resulting from the coexistence of different classificatory schemes, the presence of disease heterogeneity and multiple pathologies, the use of 'signature' brain lesions in diagnosis, and the existence of pathological processes common to different diseases. Second, three models of neurodegenerative disease are proposed: (1) that distinct diseases exist ('discrete' model); (2) that relatively distinct diseases exist but exhibit overlapping features ('overlap' model); and (3) that distinct diseases do not exist and neurodegenerative disease is a 'continuum' in which clinical/pathological features vary continuously from one case to another ('continuum' model). Third, to distinguish between the models, the distribution of the most important molecular 'signature' lesions across the different diseases is reviewed. Such lesions often have poor 'fidelity', i.e., they are not unique to individual disorders but are distributed across many diseases, consistent with the overlap or continuum models. Fourth, the question of whether the current classificatory system should be rejected is considered, and three alternatives are proposed, viz., objective classification, classification for convenience (a 'dissection'), or analysis as a continuum.

Relevance: 30.00%

Abstract:

One of the issues in the innovation systems literature is the examination of the technological learning strategies of laggard nations. Two distinct bodies of literature have contributed to our insight into the forces driving learning and innovation: the National Systems of Innovation (NSI) literature and the technological learning literature. Although both yield insights into the catch-up strategies of 'latecomer' nations, the explanatory power of each by itself is limited. In this paper, a possible way of linking the macro- and micro-level approaches is proposed, incorporating enterprises as active learning entities into the learning and innovation system. The proposed model has been used to develop research hypotheses and to indicate research directions, and is relevant for investigating the learning strategies of firms in less technologically intensive industries outside East Asia.

Relevance: 30.00%

Abstract:

Computational reflection is a well-established technique that gives a program the ability to observe dynamically, and possibly modify, its own behaviour. To date, however, reflection has mainly been applied either to the software architecture or to its implementation. We know of no approach that fully supports requirements reflection, that is, making requirements available as runtime objects. Although there is a body of literature on requirements monitoring, such work typically generates runtime artefacts from requirements, so the requirements themselves are not directly accessible at runtime. In this paper, we define requirements reflection and a set of research challenges. Requirements reflection is important because software systems of the future will be self-managing and will need to adapt continuously to changing environmental conditions. We argue that requirements reflection can support such self-adaptive systems by making requirements first-class runtime entities, thus endowing software systems with the ability to reason about, understand, explain and modify their requirements at runtime. © 2010 ACM.
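To make "requirements as first-class runtime entities" concrete, the following is a minimal hypothetical sketch, not the paper's proposal: a running system inspects which of its requirement objects are violated and reflectively replaces one at runtime. The classes, metric names and the example requirement are all invented for illustration.

```python
# Hypothetical sketch of requirements as first-class runtime objects (not the
# paper's architecture): the system can observe whether each requirement is
# satisfied against live metrics and adapt the requirement itself at runtime.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Requirement:
    rid: str
    description: str
    satisfied: Callable[[Dict[str, float]], bool]   # checked against live metrics

class RequirementsModel:
    def __init__(self) -> None:
        self._reqs: Dict[str, Requirement] = {}

    def add(self, req: Requirement) -> None:
        self._reqs[req.rid] = req

    def unsatisfied(self, metrics: Dict[str, float]) -> List[Requirement]:
        return [r for r in self._reqs.values() if not r.satisfied(metrics)]

    def replace(self, rid: str, req: Requirement) -> None:
        self._reqs[rid] = req            # reflective update of a requirement

# Invented latency requirement, then a runtime relaxation of it.
model = RequirementsModel()
model.add(Requirement("R1", "p95 latency under 200 ms",
                      lambda m: m["p95_ms"] < 200))
live = {"p95_ms": 340.0}
for r in model.unsatisfied(live):
    print("violated:", r.rid, "-", r.description)
    model.replace(r.rid, Requirement("R1", "p95 latency under 400 ms (degraded mode)",
                                     lambda m: m["p95_ms"] < 400))
print("still violated after adaptation:", [r.rid for r in model.unsatisfied(live)])
```

The point of the sketch is the contrast with requirements monitoring: here the requirement object itself, not a derived artefact, is what the system reasons about and modifies.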

Relevance: 30.00%

Abstract:

Academic researchers have followed closely the interest of companies in establishing industrial networks by studying aspects such as social interaction and contractual relationships. But what patterns underlie the emergence of industrial networks, and what support should research provide for practitioners? First, it appears that manufacturing is becoming a commodity rather than a unique capability, which accounts especially for low-technology approaches in downstream parts of the network, for example in assembly operations. Second, the increased tendency towards specialisation has forced other, upstream, parts of industrial networks to introduce advanced manufacturing technologies for niche markets. Third, the capital market for investments in capacity, and the trade in manufacturing as a commodity, dominate resource allocation to a larger extent than was previously the case. Fourth, there is a continuous move towards more loosely connected entities comprising manufacturing networks. Finally, in these networks, concepts for supply chain management should address collaboration and information technology that supports decentralised decision-making, in particular to address sustainable and green supply chains. More traditional concepts, such as the keiretsu and chaebol networks of some Asian economies, do not sufficiently support the demands now being placed on networks. Research should address these five fundamental challenges to prepare for the industrial networks of 2020 and beyond. © 2010 Springer-Verlag London.

Relevance: 30.00%

Abstract:

The enterprise management (EM) approach provides a holistic view of organizations and their related information systems. In order to align information technology (IT) innovation with global markets and volatile virtualization, traditional firms are seeking to reconstruct their enterprise structures, reposition their strategy, and establish new information system (IS) architectures so as to transform from single autonomous entities into more open enterprises supported by new Enterprise Resource Planning (ERP) systems. This chapter shows how ERP engage-abilities cater to three distinctive EM patterns and their resultant strategies. The purpose is to examine the presumptions and importance of combining ERP and inter-firm relations, relying on the virtual value chain concept. Following a review of the literature on ERP development and enterprise strategy, exploratory inductive research studies of Zoomlion and Lanye were conducted. In addition, the authors propose a dynamic conceptual framework to demonstrate the adoption and governance of ERP in the three enterprise management forms, and point to a new architectural type (ERPIII) for operating in the virtual enterprise paradigm. © 2012, IGI Global.

Relevance: 30.00%

Abstract:

Discovering who works with whom, on which projects and with which customers is a key task in knowledge management. Although most organizations keep models of their organizational structures, these models do not necessarily reflect accurately the reality on the ground. In this paper we present a text mining method called CORDER, which first recognizes named entities (NEs) of various types from Web pages and then discovers relations from a target NE to other NEs that co-occur with it. We evaluated the method on our departmental Website. We used the CORDER method first to find related NEs of four types (organizations, people, projects, and research areas) from Web pages on the Website and then to rank them according to their co-occurrence with each of the people in our department. Twenty representative people were selected, and each was presented with ranked lists of each type of NE. Each person specified whether these NEs were related to him or her and changed or confirmed their rankings. Our results indicate that the method can find the NEs with which these people are closely related and can provide accurate rankings.
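As an illustration of co-occurrence-based relation ranking in the spirit of CORDER (a simplified stand-in, not the published algorithm), the sketch below scores entities by how often, and how closely, they co-occur with a target entity across pages; the scoring formula and the data are invented.

```python
# Simplified co-occurrence relation ranking (illustrative stand-in for CORDER,
# not the published algorithm). Entities co-occurring with the target on the
# same page are scored; nearer mentions count more.
from collections import defaultdict

def rank_related(pages, target):
    """pages: list of lists of (entity, token_position) mentions. Returns ranked entities."""
    scores = defaultdict(float)
    for mentions in pages:
        target_positions = [pos for ent, pos in mentions if ent == target]
        if not target_positions:
            continue                                  # target absent from this page
        for ent, pos in mentions:
            if ent == target:
                continue
            nearest = min(abs(pos - t) for t in target_positions)
            scores[ent] += 1.0 / (1 + nearest)        # closer mentions weigh more
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Invented mentions of a person, a project and an organization on three pages.
pages = [
    [("OrgY", 0), ("Alice", 3), ("ProjectX", 10)],
    [("OrgY", 2), ("Alice", 4)],
    [("ProjectX", 1), ("Alice", 5)],
]
print(rank_related(pages, "Alice"))
```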

Relevance: 30.00%

Abstract:

In this paper, we propose a text mining method called LRD (latent relation discovery), which extends the traditional vector space model of document representation in order to improve information retrieval (IR) on documents and document clustering. Our LRD method extracts terms and entities, such as person, organization, or project names, and discovers relationships between them by taking into account their co-occurrence in textual corpora. Given a target entity, LRD discovers other entities closely related to the target effectively and efficiently. With respect to such relatedness, a measure of relation strength between entities is defined. LRD uses relation strength to enhance the vector space model, and uses the enhanced vector space model for query-based IR on documents and for clustering documents in order to discover complex relationships among terms and entities. Our experiments on a standard dataset for query-based IR show that our LRD method performed significantly better than the traditional vector space model and five other standard statistical methods for vector expansion.
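The following is a minimal sketch of the general idea of enhancing a vector space model with pairwise relation strengths (an illustrative simplification, not the LRD formulation): each term's weight is spread onto related terms before similarities are computed, so a document and a query can match through related entities they never share literally. The relation-strength matrix and terms are invented.

```python
# Illustrative vector-expansion sketch (a simplification, not the LRD method):
# co-occurrence-derived relation strengths spread each term's weight onto
# related terms before cosine similarity is computed.
import numpy as np

terms = ["ontology", "knowledge", "football", "match"]
# Invented symmetric relation-strength matrix between the terms above.
R = np.array([
    [1.0, 0.6, 0.0, 0.0],
    [0.6, 1.0, 0.0, 0.1],
    [0.0, 0.0, 1.0, 0.7],
    [0.0, 0.1, 0.7, 1.0],
])

def expand(vec, strength=0.5):
    """Blend a raw term-frequency vector with its relation-weighted expansion."""
    return vec + strength * (R @ vec)

doc = np.array([2.0, 0.0, 0.0, 0.0])       # mentions only 'ontology'
query = np.array([0.0, 1.0, 0.0, 0.0])     # asks about 'knowledge'
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(round(cos(doc, query), 3))                      # 0.0 without expansion
print(round(cos(expand(doc), expand(query)), 3))      # > 0 after expansion
```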

Relevance: 30.00%

Abstract:

Academia has followed the interest of companies in establishing industrial networks by studying aspects such as social interaction and contractual relationships. But what patterns underlie the emergence of industrial networks, and what support should research provide for practitioners? Firstly, it seems that manufacturing is becoming a commodity rather than a unique capability, which accounts especially for low-technology approaches in downstream parts of the network, for example in assembly operations. Secondly, the increased tendency to specialize forces other parts of industrial networks to introduce advanced manufacturing technologies for niche markets. Thirdly, the capital market for investments in capacity and the trade in manufacturing as a commodity dominate resource allocation to a larger extent than before. Fourthly, there will be a continuous move toward more loosely connected entities forming manufacturing networks. More traditional concepts, like keiretsu and chaebol networks, do not sufficiently support this transition. Research should address these fundamental challenges to prepare for the industrial networks of 2020 and beyond.