947 results for Specifications


Relevance:

10.00%

Publisher:

Abstract:

This paper explores the link between Plan Colombia and violence, exploiting regional differences in the success of the program to identify potential side effects on homicides and violent deaths. Results show no significant effects on homicides. On the other hand, there is evidence of an increase in the number of violent deaths among women living in urban areas, and an opposite, negative effect for men living in rural areas. These findings are consistent across different specifications of the model, the cut-off end date of the program, and the criteria used to classify the regions.
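The regional identification strategy described in the abstract is, in essence, a difference-in-differences comparison. A minimal sketch on simulated data (the exposure measure, effect size and sample are all invented; this is not the paper's specification):

```python
import numpy as np

# Minimal difference-in-differences sketch of the regional identification
# idea: compare outcome changes in high- versus low-exposure regions
# before and after the programme. All names, sizes and the effect size
# are invented for illustration.
rng = np.random.default_rng(0)
n = 2000
exposure = rng.integers(0, 2, n)   # 1 = high programme exposure region
post = rng.integers(0, 2, n)       # 1 = observed after programme start
true_effect = 0.5
y = (1.0 + 0.3 * exposure + 0.2 * post
     + true_effect * exposure * post + rng.normal(0.0, 0.1, n))

# OLS with an exposure x post interaction; the interaction coefficient
# is the difference-in-differences estimate of the programme's effect.
X = np.column_stack([np.ones(n), exposure, post, exposure * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]
```

The same regression run separately on male/rural and female/urban subsamples would reproduce the kind of heterogeneous effects the paper reports.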

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a case study that explores the advantages that can be derived from the use of a design support system during the design of wastewater treatment plants (WWTP). With this objective in mind, a simplified but plausible WWTP design case study has been generated with KBDS, a computer-based support system that maintains a historical record of the design process. The study shows how, by employing such a historical record, it is possible to: (1) rank different design proposals responding to a design problem; (2) study the influence of changing the weight of the arguments used in the selection of the most suitable proposal; (3) take advantage of keywords to assist the designer in the search for specific items within the historical records; (4) automatically evaluate the compliance of alternative design proposals with the design objectives; (5) verify the validity of previous decisions after the modification of the current constraints or specifications; (6) re-use the design records when upgrading an existing WWTP or when designing similar facilities; (7) generate documentation of the decision-making process; and (8) associate a variety of documents as annotations to any component in the design history. The paper also shows one possible future role of design support systems as they outgrow their current reactive role as repositories of historical information and start to proactively support the generation of new knowledge during the design process.

Relevance:

10.00%

Publisher:

Abstract:

Experience with map services based on the Open Geospatial Consortium (OGC) Web Map Service (WMS) specification has shown that tile caches are necessary to achieve acceptable performance in mass-market applications; however, there is no standard mechanism by which map clients can exploit the availability of this cache from the information provided by the map server. Until the new WMTS recommendation is widely enough implemented, the most widespread mechanism is OSGeo's WMS-C profile recommendation. To make the definition of maps containing WMS-C services as automatic as possible, the Geoserver map server has been extended to support a map model following the WMC recommendation with some ad-hoc extensions. The extension developed for Geoserver enlarges its REST API to include WMC support. In this way, when a new map configuration is registered by means of a WMC document in which certain layers are cached, caching is automatically activated through the GeoWebCache extension. To exploit the new capabilities provided to Geoserver, a map client has been developed that detects the existence of cached layers and uses, as appropriate, either the cached services or the traditional WMS services.
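A WMS-C tile cache only helps when client requests align exactly with the pre-rendered grid. The sketch below shows the kind of tile-to-bounding-box arithmetic a cache-aware client performs; the grid is an illustrative global EPSG:4326 layout (two 256x256 tiles at zoom 0), not Geoserver/GeoWebCache's actual configuration:

```python
# Tile-to-bounding-box arithmetic for a WMS-C aware client, so that its
# GetMap requests align exactly with the pre-rendered cache grid. The
# grid parameters here are illustrative, not a real server's defaults.
def tile_bbox(zoom, col, row):
    """Return (minx, miny, maxx, maxy) in degrees for one cache tile."""
    size = 180.0 / (2 ** zoom)    # tile width/height in degrees
    minx = -180.0 + col * size
    miny = -90.0 + row * size
    return (minx, miny, minx + size, miny + size)
```

A client snaps an arbitrary map window to these bounding boxes and requests each tile separately, so the cache can answer from disk instead of re-rendering.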

Relevance:

10.00%

Publisher:

Abstract:

It is no news that the prevailing paradigm is based on the Internet: more and more applications change their business model regarding licensing and maintenance in order to offer the end user an application that is more affordable in terms of licensing and maintenance costs, since the applications are distributed, eliminating the capital and operational costs inherent in a centralised architecture. With the spread of Internet-based Application Programming Interfaces (APIs), developers can now build applications that use functionality made available by third parties, without having to program it from scratch. In this context, the Google® application APIs allow applications to be distributed to a very large market and integrated with productivity tools, providing an opportunity for the diffusion of ideas and concepts. This work describes the process of designing and implementing a platform, using HTML5, JavaScript, PHP and MySQL technologies with Google® Apps integration, whose goal is to let the user prepare budgets, from the calculation of composite cost prices to the preparation of sale prices, the drawing up of the bill of specifications and the corresponding schedule.

Relevance:

10.00%

Publisher:

Abstract:

In most climate simulations used by the Intergovernmental Panel on Climate Change 2007 Fourth Assessment Report, stratospheric processes are only poorly represented. For example, climatological or simple specifications of time-varying ozone concentrations are imposed and the quasi-biennial oscillation (QBO) of equatorial stratospheric zonal wind is absent. Here we investigate the impact of an improved stratospheric representation using two sets of perturbed simulations with the Hadley Centre coupled ocean-atmosphere model HadGEM1 with natural and anthropogenic forcings for the 1979-2003 period. In the first set of simulations, the usual zonal mean ozone climatology with superimposed trends is replaced with a time series of observed zonal mean ozone distributions that includes interannual variability associated with the solar cycle, QBO and volcanic eruptions. In addition to this, the second set of perturbed simulations includes a scheme in which the stratospheric zonal wind in the tropics is relaxed to appropriate zonal mean values obtained from the ERA-40 re-analysis, thus forcing a QBO. Both of these changes are applied in the stratosphere only. The improved ozone field results in an improved simulation of the stepwise temperature transitions observed in the lower stratosphere in the aftermath of the two major recent volcanic eruptions. The contribution of the solar cycle signal in the ozone field to this improved representation of the stepwise cooling is discussed. The improved ozone field and also the QBO result in an improved simulation of observed trends, both globally and at tropical latitudes. The Eulerian upwelling in the lower stratosphere in the equatorial region is enhanced by the improved ozone field and is affected by the QBO relaxation, yet neither induces a significant change in the upwelling trend.
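The QBO relaxation scheme described above amounts to nudging the tropical stratospheric wind toward a reference profile. A one-variable explicit sketch of such a scheme, with invented numbers (in the paper the target comes from ERA-40 zonal means):

```python
# Sketch of the relaxation ("nudging") used to force a QBO: at every
# timestep the zonal wind u is pulled toward a reference value u_ref
# with timescale tau. All numbers below are invented for illustration.
def relax_wind(u, u_ref, dt, tau):
    """One explicit step of du/dt = -(u - u_ref) / tau."""
    return u + dt * (u_ref - u) / tau

u, u_ref = 10.0, -20.0            # m/s: current wind, easterly target
dt, tau = 3600.0, 5.0 * 86400.0   # 1-hour step, 5-day relaxation time
for _ in range(24 * 30):          # integrate one month
    u = relax_wind(u, u_ref, dt, tau)
# After a month the wind has decayed close to the target profile.
```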

Relevance:

10.00%

Publisher:

Abstract:

In the summer of 1982, the ICLCUA CAFS Special Interest Group defined three subject areas for working party activity. These were: 1) interfaces with compilers and databases, 2) end-user language facilities and display methods, and 3) text-handling and office automation. The CAFS SIG convened one working party to address the first subject, with the following terms of reference: 1) review facilities and map requirements onto them, 2) "Database or CAFS" or "Database on CAFS", 3) training needs for users to bridge to new techniques, and 4) repair specifications to cover gaps in software. The working party interpreted the topic broadly as the data processing professional's, rather than the end-user's, view of and relationship with CAFS. This report is the result of the working party's activities. For good reasons, the report's content exceeds the terms of reference in their strictest sense. For example, we examine QUERYMASTER, which ICL deems an end-user tool, from both the DP and end-user perspectives. First, it is the only interface to CAFS in the current SV201. Second, the DP department needs to understand the end-user's interface to CAFS. Third, the other subjects have not yet been addressed by other active working parties.

Relevance:

10.00%

Publisher:

Abstract:

In England, drama is embedded in the National Curriculum as part of the programmes of study for the subject of English. This means that all children aged between 5 and 16 in state-funded schools are entitled to be taught some aspects of the subject. While the manifestation of drama in primary schools is diverse, in a great many schools for students aged between 11 and 19, drama and theatre art is taught as a discrete subject, in the same way that the visual arts and music are. Students may opt for public examination courses in the subject at ages 16 and 18. To satisfy the specifications laid down for such examinations, many schools recognise the need for specialist teachers, and indeed specialist teaching rooms and equipment. This chapter outlines how drama is taught in secondary schools in England (there being subtle variations in the education systems of the other countries that make up the United Kingdom), and discusses the theories that underpin drama's place in the curriculum, both as a subject in its own right and as a vehicle for delivering other aspects of the prescribed curriculum. The chapter goes on to review how the way drama is taught articulates with the requirements and current initiatives laid down by the government. Given this context, it then explores what specialist subject and pedagogical knowledge secondary school drama teachers need. Furthermore, consideration is given to the tensions that may exist between the way drama teachers perceive their own identity as subject specialists and the restrictions and demands placed upon them by the education system within which they work. An insight into the backgrounds of those who become drama teachers in England is provided, and the reasons for choosing such a career, and the expectations and concerns that underpin their training, are identified and analysed.

Relevance:

10.00%

Publisher:

Abstract:

Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision that meets their own specific purposes, goals, and expectations. Objectives: Current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in a defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled; however, suitable modelling methods to achieve this end are lacking. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research has produced an ontology model with a set of techniques that support profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: Current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.

Relevance:

10.00%

Publisher:

Abstract:

A new approach is presented that simultaneously deals with Misreporting and Don't Know (DK) responses within a dichotomous-choice contingent valuation framework. Utilising a modification of the standard Bayesian Probit framework, a Gibbs with Metropolis-Hastings algorithm is used to estimate the posterior densities for the parameters of interest. Several model specifications are applied to two contingent valuation datasets: one on wolf management plans, and one on the US Fee Demonstration Program. We find that DKs are more likely to be from people who would be predicted to have positive utility for the bid. Therefore, a DK is more likely to be a YES than a NO. We also find evidence of misreporting, primarily in favour of the NO option. The inclusion of DK responses has an unpredictable impact on willingness-to-pay estimates, since it impacts differently on the results for the two datasets we examine. Copyright (C) 2009 John Wiley & Sons, Ltd.
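The core computational device, Gibbs sampling with data augmentation for a Bayesian probit, can be sketched on simulated data. The version below is the plain Albert and Chib (1993) sampler, without the paper's misreporting and Don't-Know components, which would add the Metropolis-Hastings steps:

```python
import numpy as np
from scipy.stats import truncnorm

# Plain data-augmentation Gibbs sampler for a Bayesian probit
# (Albert and Chib, 1993), the building block the paper extends.
# Data are simulated; all numbers are illustrative.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.3, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

B0_inv = 0.01 * np.eye(2)            # weak N(0, 100 I) prior precision
V = np.linalg.inv(B0_inv + X.T @ X)  # posterior covariance (latent var = 1)
beta = np.zeros(2)
draws = []
for it in range(600):
    mu = X @ beta
    # Draw latent utilities z ~ N(mu, 1) truncated by the observed outcome:
    # z > 0 when y = 1, z < 0 when y = 0 (bounds are standardised residuals).
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Draw beta | z from its conjugate normal full conditional.
    beta = rng.multivariate_normal(V @ (X.T @ z), V)
    if it >= 100:                    # discard burn-in
        draws.append(beta)
post_mean = np.mean(draws, axis=0)   # sits near beta_true
```

Misreporting and DK responses change the likelihood of each observed answer given the latent utility, which is why the full model needs Metropolis-Hastings steps inside this Gibbs loop.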

Relevance:

10.00%

Publisher:

Abstract:

The paper presents the method and findings of a Delphi expert survey to assess the impact of UK government farm animal welfare policy, farm assurance schemes and major food retailer specifications on the welfare of animals on farms. Two case-study livestock production systems are considered: dairy and cage egg production. The method identifies how well the various standards perform in terms of their effects on a number of key farm animal welfare variables, and provides estimates of the impact of the three types of standard on the welfare of animals on farms, taking account of producer compliance. The study highlights that there remains considerable scope for government policy, together with farm assurance schemes, to improve the welfare of farm animals by introducing standards that address key factors affecting animal welfare and by increasing compliance among livestock producers. There is a need for more comprehensive, regular and random surveys of on-farm welfare to monitor compliance with welfare standards (legislation and welfare codes) and the welfare of farm animals over time, and a need to collect farm data on the costs of compliance with standards.

Relevance:

10.00%

Publisher:

Abstract:

This article explores how data envelopment analysis (DEA), along with a smoothed bootstrap method, can be used in applied analysis to obtain more reliable efficiency rankings for farms. The main focus is the smoothed homogeneous bootstrap procedure introduced by Simar and Wilson (1998) to implement statistical inference for the original efficiency point estimates. Two main model specifications, constant and variable returns to scale, are investigated along with various choices regarding data aggregation. The coefficient of separation (CoS), a statistic that indicates the degree of statistical differentiation within the sample, is used to demonstrate the findings. The CoS suggests a substantive dependency of the results on the methodology and assumptions employed. Accordingly, some observations are made on how to conduct DEA in order to get more reliable efficiency rankings, depending on the purpose for which they are to be used. In addition, attention is drawn to the ability of the SLICE MODEL, implemented in GAMS, to enable researchers to overcome the computational burdens of conducting DEA (with bootstrapping).
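The envelopment-form LP behind the DEA point estimates can be sketched with a generic LP solver. Below is an input-oriented, constant-returns-to-scale (CCR) version on a three-farm toy dataset (all numbers invented); the smoothed bootstrap would re-solve such LPs on resampled data:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented, constant-returns (CCR) DEA envelopment LP for one
# decision-making unit (DMU). The three-farm dataset is invented.
X = np.array([[2.0, 4.0, 4.0],    # inputs: rows = inputs, cols = farms
              [3.0, 1.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0]])   # single output, rows = outputs

def ccr_efficiency(o):
    """Solve: min theta s.t. X.lam <= theta*x_o, Y.lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                 # minimise theta
    A_in = np.hstack([-X[:, [o]], X])          # X.lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y.lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [ccr_efficiency(o) for o in range(X.shape[1])]  # farm 2 is inefficient
```

The variable-returns-to-scale variant simply adds the convexity constraint that the lambdas sum to one, which is one of the model-specification choices the article compares.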

Relevance:

10.00%

Publisher:

Abstract:

When formulating least-cost poultry diets, ME concentration should be optimised by an iterative procedure, not entered as a fixed value. This iteration must calculate profit margins by taking into account the way in which feed intake and saleable outputs vary with ME concentration. In the case of broilers, adjustment of critical amino acid contents in direct proportion to ME concentration does not result in birds of equal fatness. To avoid an increase in fat deposition at higher energy levels, it is proposed that amino acid specifications should be adjusted in proportion to changes in the net energy supplied by the feed. A model is available which will both interpret responses to amino acids in laying trials and give economically optimal estimates of amino acid inputs for practical feed formulation. Flocks coming into lay and flocks nearing the end of the pullet year have bimodal distributions of rates of lay, with the result that calculations of requirement based on mean output will underestimate the optimal amino acid input for the flock. Chick diets containing surplus protein can lead to impaired utilisation of the first-limiting amino acid. This difficulty can be avoided by stating amino acid requirements as a proportion of the protein.
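The least-cost formulation at any single candidate ME level is a linear program; the iteration over ME concentration described above would wrap an LP like the following sketch (ingredient names, prices, nutrient values and constraint levels are all invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# One least-cost formulation step at a fixed candidate ME level. The
# iterative optimisation of ME concentration would re-solve this LP at
# each candidate level and compare the resulting profit margins.
cost = np.array([0.20, 0.45, 0.90])   # price per kg: maize, soya, oil
ME = np.array([13.5, 10.0, 37.0])     # MJ ME per kg of each ingredient
lys = np.array([0.25, 2.8, 0.0])      # % lysine in each ingredient
target_ME, target_lys = 12.8, 1.1     # candidate diet specification

A_eq = [np.ones(3)]                   # ingredient proportions sum to 1
b_eq = [1.0]
A_ub = [-ME, -lys]                    # minimum ME and lysine contents
b_ub = [-target_ME, -target_lys]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
diet, diet_cost = res.x, res.fun      # cheapest mix meeting the spec
```

Scaling `target_lys` with net energy rather than ME, as proposed above, would change only the right-hand side of the lysine constraint at each iteration.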

Relevance:

10.00%

Publisher:

Abstract:

Using mixed logit models to analyse choice data is common but requires ex ante specification of the functional forms of the preference distributions. We make the case for greater use of bounded functional forms and propose the use of the marginal likelihood, calculated using Bayesian techniques, as a single measure of model performance across non-nested mixed logit specifications. Using this measure leads to very different rankings of model specifications compared to alternative rule-of-thumb measures. The approach is illustrated using data from a choice experiment on GM food types, which provides insights into the recent WTO dispute between the EU and the US, Canada and Argentina, and into whether labelling and trade regimes should be based on the production process or on product composition.

Relevance:

10.00%

Publisher:

Abstract:

The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. 
Copyright © 2004 Royal Meteorological Society
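For readers more familiar with ordinary regression, the OLS analogue of these diagnostics is the hat matrix: its diagonal entries are the self-sensitivities and its trace is the degrees of freedom for signal. A minimal numerical illustration (random design matrix, no connection to the ECMWF system):

```python
import numpy as np

# OLS analogue of the influence diagnostics: the hat (influence) matrix
# H = X (X'X)^{-1} X'. Its diagonal entries give the self-sensitivity of
# each observation and trace(H) the degrees of freedom for signal.
rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T
self_sens = np.diag(H)      # each value lies in [0, 1]
dof_signal = np.trace(H)    # equals p for a full-rank design
```

In the variational assimilation setting the same quantities are defined with respect to the analysis operator rather than an explicit projection, which is why the paper needs an approximate method to compute the diagonal.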