843 results for Sensor of electric measures
Abstract:
Perhaps the most characteristic feature of our times is that economic development has become the goal and ambition of people. The needs which this desire creates are immense; they are, of course, urgent everywhere and cannot be postponed. Consequently there was a frantic search for formulae of rapid economic development. It was claimed that agrarian reform is the indispensable condition for the development of productive forces and the industrialization of the state. A key element in land reform policy is the provision for ownership of land. Measures taken include the redistribution of large estates, assistance to tenants or labourers to acquire holdings, and settlement schemes to establish new farming units on reclaimed or developed lands. In this thesis an attempt is made to evaluate the impact of these reforms on the agrarian structure in general and on the Scheduled Castes in particular.
Abstract:
A study on the different aspects of the spiny lobster fishery of the south-west coast of India, with respect to the factors relevant to production, including conservation and management measures for putting this fishery on a sound basis, needs no emphasis. Some aspects of this fishery have not been sufficiently inquired into, and others have been touched upon only intermittently and in a languid way. The attempt here is to throw light on these aspects from a production point of view. Emphasis is on harvest technology and on conservation and management measures; it is proposed to make a critical review of such measures in vogue in other lobster-fishing countries and to discuss suitable methods for this fishery.
Abstract:
In this paper, we study the relationship between the failure rate and the mean residual life of doubly truncated random variables. Accordingly, we develop characterizations for the exponential, Pareto II and beta distributions. Further, we generalize the identities for the Pearson and the exponential family of distributions given respectively in Nair and Sankaran (1991) and Consul (1995). Applications of these measures in the context of length-biased models are also explored.
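For reference, in the classical untruncated case the failure rate and the mean residual life are linked by a well-known identity (the paper's contribution is the analogue for doubly truncated variables, which is not reproduced here):

```latex
% Classical (untruncated) identity: with survival function S(t),
% density f(t), failure rate h(t) and mean residual life m(t),
m(t) = \mathbb{E}\left[X - t \mid X > t\right], \qquad
h(t) = \frac{f(t)}{S(t)} = \frac{1 + m'(t)}{m(t)} .
```

Either function therefore determines the other, which is why characterization results of the kind developed in the paper are possible.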
Abstract:
Energy production from biomass and the conservation of ecologically valuable grassland habitats are two important issues in agriculture today. The combination of a bioenergy production which minimises environmental impacts and competition with food production for land, with the conversion of semi-natural grasslands through new utilization alternatives for the biomass, led to the development of the IFBB process. Its basic principle is the separation of biomass into a liquid fraction (press fluid, PF) for the production of electric and thermal energy after anaerobic digestion to biogas, and a solid fraction (press cake, PC) for the production of thermal energy through combustion. This study was undertaken to explore mass and energy flows as well as quality aspects of energy carriers within the IFBB process and to determine their dependency on biomass-related and technical parameters. Two experiments were conducted, in which biomass from semi-natural grassland was conserved as silage and subjected to hydrothermal conditioning and a subsequent mechanical dehydration with a screw press. Methane yield of the PF and the untreated silage was determined in anaerobic digestion experiments in batch fermenters at 37°C, with fermentation times of 13-15 days for the PF and 27-35 days for the silage. Concentrations of dry matter (DM), ash, crude protein (CP), crude fibre (CF), ether extract (EE), neutral detergent fibre (NDF), acid detergent fibre (ADF), acid detergent lignin (ADL) and elements (K, Mg, Ca, Cl, N, S, P, C, H, N) were determined in the untreated biomass and the PC. Higher heating value (HHV) and ash softening temperature (AST) were calculated based on elemental concentrations. Chemical composition of the PF and mass flows of all plant compounds into the PF were calculated.
In the first experiment, biomass from five different semi-natural grassland swards (Arrhenaterion I and II, Caricion fuscae, Filipendulion ulmariae, Polygono-Trisetion) was harvested at one late sampling (19 July or 31 August) and ensiled. Each silage was subjected to three different temperature treatments (5°C, 60°C, 80°C) during hydrothermal conditioning. Based on observed methane yields and HHV as energy output parameters as well as literature-based and observed energy input parameters, energy and greenhouse gas (GHG) balances were calculated for IFBB and two reference conversion processes, whole-crop digestion of untreated silage (WCD) and combustion of hay (CH). In the second experiment, biomass from one single semi-natural grassland sward (Arrhenaterion) was harvested at eight consecutive dates (27/04, 02/05, 09/05, 16/05, 24/05, 31/05, 11/06, 21/06) and ensiled. Each silage was subjected to six different treatments (no hydrothermal conditioning and hydrothermal conditioning at 10°C, 30°C, 50°C, 70°C, 90°C). Energy balance was calculated for IFBB and WCD. Multiple regression models were developed to predict mass flows, concentrations of elements in the PC, concentration of organic compounds in the PF and energy conversion efficiency of the IFBB process from the temperature of hydrothermal conditioning as well as the NDF and DM concentration in the silage. Results showed a relative reduction of ash and of all elements detrimental for combustion in the PC, compared to the untreated biomass, of 20-90%. Reduction was highest for K and Cl and lowest for N. HHV of PC and untreated biomass were in a comparable range (17.8-19.5 MJ kg-1 DM), but AST of PC was higher (1156-1254°C). Methane yields of PF, ranging from 332 to 458 LN kg-1 VS, were higher than those of WCD when the biomass was harvested late (end of May and later) and in a comparable range when the biomass was harvested early.
Regarding energy and GHG balances, IFBB, with a net energy yield of 11.9-14.1 MWh ha-1, a conversion efficiency of 0.43-0.51, and GHG mitigation of 3.6-4.4 t CO2eq ha-1, performed better than WCD but worse than CH: WCD produces thermal and electric energy with low efficiency; CH produces only thermal energy, from a low-quality solid fuel, with high efficiency; IFBB produces thermal and electric energy, with a solid fuel of high quality, at medium efficiency. The regression models were able to predict the target parameters with high accuracy (R2=0.70-0.99). Increasing the temperature of hydrothermal conditioning increased mass flows, decreased element concentrations in the PC and had a differing effect on energy conversion efficiency. Increasing NDF concentration of the silage had a differing effect on mass flows, decreased element concentrations in the PC and increased energy conversion efficiency. Increasing DM concentration of the silage decreased mass flows, increased element concentrations in the PC and increased energy conversion efficiency. Based on the models, an optimised IFBB process would be obtained with a medium temperature of hydrothermal conditioning (50°C), high NDF concentrations in the silage and medium DM concentrations in the silage.
Predicting sense of community and participation by applying machine learning to open government data
Abstract:
Community capacity is used to monitor socio-economic development. It is composed of a number of dimensions, which can be measured to understand the possible issues in the implementation of a policy or the outcome of a project targeting a community. Measuring community capacity dimensions is usually expensive and time consuming, requiring locally organised surveys. Therefore, we investigate a technique to estimate them by applying the Random Forests algorithm on secondary open government data. This research focuses on the prediction of measures for two dimensions: sense of community and participation. The most important variables for this prediction were determined. The variables included in the datasets used to train the predictive models complied with two criteria: nationwide availability; sufficiently fine-grained geographic breakdown, i.e. neighbourhood level. The models explained 77% of the sense of community measures and 63% of participation. Due to the low geographic detail of the outcome measures available, further research is required to apply the predictive models to a neighbourhood level. The variables that were found to be more determinant for prediction were only partially in agreement with the factors that, according to the social science literature consulted, are the most influential for sense of community and participation. This finding should be further investigated from a social science perspective, in order to be understood in depth.
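The prediction setup described above can be sketched with scikit-learn's Random Forest regressor. The data below are synthetic placeholders, not the study's open government variables or outcome measures:

```python
# Sketch: predict a community-capacity measure from tabular predictors
# with a Random Forest, then inspect variable importances.
# All data here are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))  # e.g. five open-data variables per area
# Hypothetical outcome: driven mainly by variables 0 and 2 plus noise
y = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
print("variable importances:", np.round(model.feature_importances_, 3))
```

The `feature_importances_` attribute is what supports the "most important variables" analysis mentioned in the abstract; on real data a cross-validated score would replace the single train/test split.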
Abstract:
In this project we analyse how organisations relate to their environment and to marketing. The aim is to determine the methods for analysing customer communities through community strategic relations and marketing. Through marketing, the environment can be understood and the analytical methods for getting to know the community of customers can be determined. Marketing people attend to everything that happens in the environment, staying alert so as to know when there are opportunities that may benefit the organisation or, on the other hand, when there are threats to guard against. Depending on the environment, the organisation designs its marketing activities to satisfy consumer needs. These activities are conceptualised as product, price, promotion and place, defined and designed around the community in which the organisation is immersed. It is important to seek reliable information about the target group to which the product or service will be offered, since these people must be analysed and understood in order to design a good offer that satisfies their needs and desires. The person who receives the product or service from the organisation is the customer. Customers are the people who come to an organisation seeking to satisfy their needs through the goods and services that companies offer. It is essential to recognise that customers live in community: they share ideas through the close communication they maintain, and they live together under the same customs. Because of this, consumers today gather in customer communities, and to reach these customers they must be analysed through a variety of methods.
The use of community strategies is necessary because marketing is the means by which the environment is analysed and methods are sought for analysing the community of customers, who share characteristics and are analysed as a group rather than as individuals. It is necessary to identify methods for relating to the customer community in order to approach these customers, get to know them well, learn their needs and desires, and offer them products and services accordingly. At present these methods are neither common nor well known, which is why our purpose is to investigate and identify them so as to be able to analyse communities. This project uses a theoretical-conceptual methodology, seeking out the sources of information needed to carry out our research. We plan to work with the Grupo de Investigación en Perdurabilidad Empresarial, and the management line was chosen because it opens the way into the knowledge society, enabling us to identify managerial opportunities in the environment. These methods are worth investigating because customers expect excellent, attentive service that cares about them and their needs.
Abstract:
Measuring inequality of opportunity with the PISA databases involves several limitations: (i) the sample represents only a limited fraction of the cohorts of 15-year-olds in developing countries, and (ii) these fractions are not uniform across countries or across periods. This raises doubts about the reliability of such measurements when used for international comparisons: greater apparent equity may simply be the result of a more restricted, more homogeneous sample. Unlike previous approaches based on reconstructing the samples, the approach of this paper is to provide a two-dimensional index that includes both achievement and access as dimensions. Several aggregation methods are used, and considerable changes are observed in the rankings of (in)equality of opportunity between observing achievement alone and observing both dimensions in the PISA 2006/2009 tests. Finally, a generalisation of the approach is proposed, allowing additional dimensions and other weights in the aggregation.
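How adding the access dimension can reorder an equity ranking is easy to see with a toy weighted aggregation. The country labels, scores and weights below are invented for illustration; they are not the paper's data or its actual aggregation methods:

```python
# Toy two-dimensional index: combine an "achievement" score and an
# "access" score (both normalised to [0, 1]) with a weighted power mean.
# Setting w=1.0 ignores access, reproducing a one-dimensional ranking.
def aggregate(achievement, access, w=0.5, beta=1.0):
    """Weighted power mean of the two dimensions; beta=1 is the
    plain weighted arithmetic mean."""
    return (w * achievement**beta + (1 - w) * access**beta) ** (1 / beta)

# Hypothetical countries: A scores high on achievement but low on access
countries = {"A": (0.80, 0.40), "B": (0.60, 0.90), "C": (0.70, 0.70)}

for w in (1.0, 0.5):
    ranking = sorted(countries,
                     key=lambda c: aggregate(*countries[c], w=w),
                     reverse=True)
    print(f"w={w}: ranking {ranking}")
```

With achievement alone (w=1.0) country A leads; once access enters with equal weight, A drops to last, which is the kind of ranking reversal the paper reports.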
Abstract:
The control and prediction of wastewater treatment plants serves an important goal: to avoid upsetting the environmental balance by keeping the system in stable operating conditions at all times. It is known that qualitative information — coming from microscopic examinations and subjective remarks — has a deep influence on the activated sludge process, in particular on the total amount of effluent suspended solids, one of the measures of overall plant performance. The search for an input–output model of this variable and the prediction of sudden increases (bulking episodes) is thus a central concern in ensuring the fulfillment of current discharge limitations. Unfortunately, the strong interrelation between variables, their heterogeneity and the very high amount of missing information make the use of traditional techniques difficult, or even impossible. Through the combined use of several methods — rough set theory and artificial neural networks, mainly — reasonable prediction models are found, which also serve to show the relative importance of the variables and provide insight into the process dynamics.
Abstract:
Urban regeneration programmes in the UK over the past 20 years have increasingly focused on attracting investors, middle-class shoppers and visitors by transforming places and creating new consumption spaces. Ensuring that places are safe and are seen to be safe has taken on greater salience as these flows of income are easily disrupted by changing perceptions of fear and the threat of crime. At the same time, new technologies and policing strategies and tactics have been adopted in a number of regeneration areas which seek to establish control over these new urban spaces. Policing space is increasingly about controlling human actions through design, surveillance technologies and codes of conduct and enforcement. Regeneration agencies and the police now work in partnerships to develop their strategies. At its most extreme, this can lead to the creation of zero-tolerance, or what Smith terms 'revanchist', measures aimed at particular social groups in an effort to sanitise space in the interests of capital accumulation. This paper, drawing on an examination of regeneration practices and processes in one of the UK's fastest-growing urban areas, Reading in Berkshire, assesses policing strategies and tactics in the wake of a major regeneration programme. It documents and discusses the discourses of regeneration that have developed in the town and the ways in which new urban spaces have been secured. It argues that, whilst security concerns have become embedded in institutional discourses and practices, the implementation of security measures has been mediated, in part, by the local socio-political relations in and through which they have been developed.
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space filling Hilbert’s curve, as it possesses good locality preserving properties. However, there exists little comparison between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1d space with locality preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2d network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvements and better techniques to preserve locality information are required.
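The scalar-index construction can be illustrated for the simplest case of two landmarks (a 2-d latency vector) using the classic iterative Hilbert-curve mapping. The grid size and latency values below are illustrative assumptions, not parameters from the study:

```python
# Classic iterative mapping from an (x, y) cell on an n x n grid
# (n a power of two) to its distance along the Hilbert curve.
def xy2d(n, x, y):
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so deeper levels see a canonical curve
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical use: quantise two landmark latencies (assumed 0-399 ms)
# onto an 8 x 8 grid, then derive a scalar peer index from the cell.
n = 8
for lat in [(12, 80), (14, 77), (300, 20)]:
    cell = tuple(min(n - 1, v * n // 400) for v in lat)
    print(f"latencies {lat} ms -> cell {cell} -> index {xy2d(n, *cell)}")
```

Because the mapping is a bijection onto 0..n²-1, each cell gets a unique index, and cells close on the curve tend to be close in latency space, which is the locality-preservation property the comparison in the abstract is about.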
Abstract:
Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applications of this knowledge to the estimation of scrapie-affected holding population sizes and the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case it should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different - potentially more complex - relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model, in which the hypothesis of proportionality enters as a special offset model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line, which increases for small holdings up to a maximum after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models, as well as on the basis of the generalized Zelterman estimator. Estimators based on the proportionality model return very different and unreasonable estimates for the population sizes.
Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within holding. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render ineffective the current strategies for the control of the disease.
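A minimal sketch of the zero-truncated Poisson capture-recapture idea named in the Methods: fit the rate from the observed positive case counts, then scale up for the unobservable zero-count holdings. The case counts below are invented for illustration, and the simple homogeneous model here omits the offset/covariate structure the paper compares:

```python
# Zero-truncated Poisson fit and the Horvitz-Thompson-type Poisson
# estimator N_hat = n / (1 - exp(-lambda_hat)) for the total number
# of affected holdings, including those with zero recorded cases.
import math

def fit_ztp(counts, tol=1e-10, max_iter=1000):
    """MLE of lambda for a zero-truncated Poisson, via fixed-point
    iteration on the mean equation  mean = lambda / (1 - exp(-lambda))."""
    m = sum(counts) / len(counts)
    lam = m  # starting value
    for _ in range(max_iter):
        new = m * (1.0 - math.exp(-lam))
        if abs(new - lam) < tol:
            break
        lam = new
    return lam

def poisson_population_estimate(counts):
    lam = fit_ztp(counts)
    n = len(counts)  # holdings observed with at least one case
    return lam, n / (1.0 - math.exp(-lam))

counts = [1, 1, 1, 2, 1, 3, 1, 2, 1, 1]  # illustrative case counts
lam, n_hat = poisson_population_estimate(counts)
print(f"lambda_hat = {lam:.3f}, estimated affected holdings N_hat = {n_hat:.1f}")
```

The sensitivity the Results describe arises because N_hat depends on lambda_hat only through exp(-lambda_hat), so a misspecified model for the counts feeds directly into the population-size estimate.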
Abstract:
ES-62 is a phosphorylcholine-containing glycoprotein secreted by filarial nematodes. This molecule has been shown to reduce the severity of inflammation in collagen-induced arthritis (CIA) in mice, a model of rheumatoid arthritis, via down-regulation of anti-collagen type 1 immune responses. Malaria parasites induce a pro-inflammatory host immune response and many of the symptoms of malaria are immune system-mediated. Therefore we have asked whether the immunomodulatory properties of ES-62 can down-regulate the severity of malaria infection in BALB/c mice infected with Plasmodium chabaudi. We have found that ES-62 has no significant effect on the course of P. chabaudi parasitaemia, and does not significantly affect any of the measures of malaria-induced pathology taken throughout infection.
Abstract:
This study investigated, for the D2 dopamine receptor, the relation between the ability of agonists and inverse agonists to stabilise different states of the receptor and their relative efficacies. Ki values for agonists were determined in competition versus the binding of the antagonist [3H]spiperone. Competition data were fitted best by a two-binding-site model (with the exception of bromocriptine, for which a one-binding-site model provided the best fit) and agonist affinities for the higher-affinity (Kh) (G protein-coupled) and lower-affinity (Kl) (G protein-uncoupled) sites determined. Ki values for agonists were also determined in competition versus the binding of the agonist [3H]N-propylnorapomorphine (NPA) to provide a second estimate of Kh. Maximal agonist effects (Emax) and their potencies (EC50) were determined from concentration-response curves for agonist stimulation of guanosine-5'-O-(3-[35S]thiotriphosphate) ([35S]GTPγS) binding. The ability of agonists to stabilise the G protein-coupled state of the receptor (Kl/Kh, determined from ligand-binding assays) did not correlate with either of two measures of relative efficacy (relative Emax, Kl/EC50) of agonists determined in [35S]GTPγS-binding assays when the data for all of the compounds tested were analysed. For a subset of compounds, however, there was a relation between Kl/Kh and Emax. Competition-binding data versus [3H]spiperone and [3H]NPA for a range of inverse agonists were fitted best by a one-binding-site model. Ki values for the inverse agonists tested were slightly lower in competition versus [3H]NPA compared to [3H]spiperone. These data do not provide support for the idea that inverse agonists act by binding preferentially to the ground state of the receptor. (C) 2004 Elsevier Inc. All rights reserved.
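The two-binding-site competition model referred to above has a standard functional form: the bound fraction is a weighted sum of two one-site terms. This sketch uses illustrative parameter values, not the study's fitted constants:

```python
# Standard two-site competition model: fraction of specific radioligand
# binding remaining at competitor concentration L, with a fraction Fh of
# high-affinity sites (constant Kh) and 1 - Fh low-affinity sites (Kl).
def two_site_bound(L, Kh, Kl, Fh):
    return Fh / (1.0 + L / Kh) + (1.0 - Fh) / (1.0 + L / Kl)

# Hypothetical parameters (nM): a 100-fold Kl/Kh ratio, 40% coupled sites
Kh, Kl, Fh = 1.0, 100.0, 0.4
for L in (0.0, 1.0, 10.0, 100.0, 1000.0):
    print(f"[L] = {L:7.1f} nM  bound fraction = {two_site_bound(L, Kh, Kl, Fh):.3f}")
```

The characteristic shallow, biphasic displacement curve this produces is what lets a fit distinguish the two-site from the one-site model, and the ratio Kl/Kh quantifies how strongly a ligand stabilises the G protein-coupled state.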
Abstract:
Geological carbon dioxide storage (CCS) has the potential to make a significant contribution to the decarbonisation of the UK. Amid concerns over maintaining security, and hence diversity, of supply, CCS could allow the continued use of coal, oil and gas whilst avoiding the CO2 emissions currently associated with fossil fuel use. This project has explored some of the geological, environmental, technical, economic and social implications of this technology. The UK is well placed to exploit CCS with a large offshore storage capacity, both in disused oil and gas fields and in saline aquifers. This capacity should be sufficient to store CO2 from the power sector (at current levels) for at least one century, using well-understood, and therefore likely to be lower-risk, depleted hydrocarbon fields and contained parts of aquifers. It is very difficult to produce reliable estimates of the (potentially much larger) storage capacity of the less well understood geological reservoirs, such as non-confined parts of aquifers. With the majority of its large coal-fired power stations due to be retired during the next 15 to 20 years, the UK is at a natural decision point with respect to the future of power generation from coal; the existence of both national reserves and the infrastructure for receiving imported coal makes clean coal technology a realistic option. The notion of CCS as a 'bridging' or 'stop-gap' technology (i.e. whilst we develop 'genuinely' sustainable renewable energy technologies) needs to be examined somewhat critically, especially given the scale of global coal reserves. If CCS plant is built, then it is likely that technological innovation will bring down the costs of CO2 capture, such that it could become increasingly attractive. As with any capital-intensive option, there is a danger of becoming 'locked-in' to a CCS system.
The costs of CCS in our model for UK power stations in the East Midlands and Yorkshire to reservoirs in the North Sea are between £25 and £60 per tonne of CO2 captured, transported and stored. This is between about 2 and 4 times the current traded price of a tonne of CO2 in the EU Emissions Trading Scheme. In addition to the technical and economic requirements of the CCS technology, it should also be socially and environmentally acceptable. Our research has shown that, given an acceptance of the severity and urgency of addressing climate change, CCS is viewed favourably by members of the public, provided it is adopted within a portfolio of other measures. The most commonly voiced concern from the public is that of leakage and this remains perhaps the greatest uncertainty with CCS. It is not possible to make general statements concerning storage security; assessments must be site specific. The impacts of any potential leakage are also somewhat uncertain but should be balanced against the deleterious effects of increased acidification in the oceans due to uptake of elevated atmospheric CO2 that have already been observed. Provided adequate long term monitoring can be ensured, any leakage of CO2 from a storage site is likely to have minimal localised impacts as long as leaks are rapidly repaired. A regulatory framework for CCS will need to include risk assessment of potential environmental and health and safety impacts, accounting and monitoring and liability for the long term. In summary, although there remain uncertainties to be resolved through research and demonstration projects, our assessment demonstrates that CCS holds great potential for significant cuts in CO2 emissions as we develop long term alternatives to fossil fuel use. CCS can contribute to reducing emissions of CO2 into the atmosphere in the near term (i.e. peak-shaving the future atmospheric concentration of CO2), with the potential to continue to deliver significant CO2 reductions over the long term.