829 results for Two Approaches


Relevance: 60.00%

Abstract:

The economic efficiency of dairy cattle farming depends on using animals that simultaneously show good performance in production, reproduction, health and longevity. In this regard, the selection index is an important tool for increasing profitability in such systems, since it allows sires to be selected for several traits at once, taking into account both the relationships among the traits and their economic relevance. With the recent availability of genomic data, it has also become possible to expand the scope and accuracy of selection indices by increasing the number and quality of the information considered. In this context, two studies were developed. The objective of the first was to estimate genetic parameters and breeding values (VG) for traits related to milk production and quality, including genomic information in the genetic evaluation. Records of age at first calving (IPP), milk yield (PROD), fat content (GOR), protein (PROT), lactose, casein, somatic cell score (ECS) and fatty acid profile from 4,218 cows were used, together with the genotypes of 755 cows for 57,368 single nucleotide polymorphisms (SNP). Variance components and VG were obtained with an animal mixed model including the effects of contemporary group, lactation order and days in milk, plus additive genetic, permanent environmental and residual effects. Two approaches were developed: a traditional one, in which the relationship matrix is based on pedigree; and a genomic one, in which this matrix is built by combining pedigree and SNP information. Heritabilities ranged from 0.07 to 0.39. Genetic correlations between PROD and the milk components ranged from -0.45 to -0.13, while high positive correlations were estimated between GOR and the fatty acids. Using the genomic approach did not change the estimates of genetic parameters; however, the accuracy of the VG increased by 1.5% to 6.8%, except for IPP, for which it decreased by 1.9%. The objective of the second study was to incorporate genomic information into the development of economic selection indices. Here, the VG for PROD, GOR, PROT, unsaturated fatty acid content (INSAT), ECS and productive life were combined into selection indices weighted by economic values estimated under three payment scenarios: payment exclusively for milk volume (PAG1); for volume and milk components (PAG2); and for volume and milk components including INSAT (PAG3). These VG were predicted from the phenotypes of 4,293 cows and the genotypes of 755 animals with a multiple-trait model under both the traditional and the genomic approach. The use of genomic information influenced the variance components, the VG and the response to selection. Nevertheless, rank correlations between the two approaches were high in all three scenarios, with values between 0.91 and 0.99. Differences were mainly observed between PAG1 and the other scenarios, with correlations between 0.67 and 0.88. The relative importance of the traits and the profile of the top-ranked animals were sensitive to the payment scenario considered. Accounting for the economic values of the traits thus proved essential for genetic evaluation and selection decisions.
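
The abstract does not spell out how the pedigree and SNP information are combined into a single relationship matrix, nor the exact form of the index. As a hedged reading only, a common construction in this literature is a weighted index over breeding values together with the single-step blended matrix H:

```latex
% Selection index: breeding values VG_i weighted by economic values w_i
I = \sum_{i=1}^{n} w_i \,\mathrm{VG}_i
% One standard pedigree-genomic blend (single-step GBLUP; assumed here, not
% named in the abstract): A is the pedigree relationship matrix, G the
% SNP-based genomic matrix, A_{22} the pedigree block of genotyped animals
H^{-1} = A^{-1} + \begin{pmatrix} 0 & 0 \\ 0 & G^{-1} - A_{22}^{-1} \end{pmatrix}
```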

Relevance: 60.00%

Abstract:

An image contains information that must be organized in order to interpret and understand its content. There are many computational techniques for extracting the main information from an image, and they can be divided into three areas: colour, texture and shape analysis. Shape analysis is one of the most important of these, as it describes characteristics of objects based on their boundary points. We propose a method for characterizing images, through shape analysis, based on the spectral properties of the graph Laplacian. The procedure builds graphs G from the boundary points of the object, whose connections between vertices are determined by thresholds T_l. From the graphs we obtain the adjacency matrix A and the degree matrix D, which define the Laplacian matrix L = D - A. The spectral decomposition of the Laplacian matrix (its eigenvalues) is investigated as a descriptor of image characteristics. Two approaches are considered: a) analysis of a feature vector based on thresholds and histograms, which takes two parameters, the class interval IC_l and the threshold T_l; b) analysis of a feature vector based on several thresholds for fixed eigenvalues, namely the second and the last eigenvalue of the matrix L. The techniques were tested on three image collections: synthetic images (Generic), intestinal parasites (SADPI) and plant leaves (CNShape), each with its own characteristics and challenges. To evaluate the results we used a support vector machine (SVM) classifier, which assesses our approaches by measuring how well the categories are separated. The first approach achieved an accuracy of 90% on the Generic collection, 88% on SADPI and 72% on CNShape. The second approach achieved 97% on the Generic collection, 83% on SADPI and 86% on CNShape. The results show that image classification based on the Laplacian spectrum categorizes the images satisfactorily.
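
The pipeline described above (boundary points, thresholded connections, L = D - A, eigenvalues) maps directly onto a few lines of linear algebra. A minimal Python sketch follows; the function name and the Euclidean-distance interpretation of the threshold T_l are our assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def laplacian_spectrum(boundary_points, threshold):
    # boundary_points: (n, 2) array of the object's contour coordinates
    dist = squareform(pdist(boundary_points))  # pairwise Euclidean distances
    A = (dist <= threshold).astype(float)      # connect vertices closer than T_l
    np.fill_diagonal(A, 0.0)                   # no self-loops
    D = np.diag(A.sum(axis=1))                 # degree matrix
    L = D - A                                  # graph Laplacian L = D - A
    return np.linalg.eigvalsh(L)               # eigenvalues, in ascending order
```

The sorted spectrum, or a histogram of it built with class interval IC_l, would then serve as the feature vector passed to the SVM.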

Relevance: 60.00%

Abstract:

Formulations based on continuum mechanics, although accurate up to a point, sometimes cannot be used, or are not conceptually correct, for understanding phenomena at reduced scales. These limitations can appear in the study of tribological phenomena at the nanometre scale, which then require new experimental, theoretical and computational methods capable of exploring such phenomena with the necessary resolution. Atomistic simulations can describe small-scale phenomena, but the number of atoms that must be modelled, and therefore the computational cost, usually becomes very high. Simulation methods based on continuum mechanics, on the other hand, are more attractive in terms of computational cost, but are not accurate at the atomic scale. Combining the two approaches can therefore allow a more realistic understanding of tribological phenomena. In this work, the basic concepts and models of friction at the atomic scale are discussed, and numerical simulation studies are presented for analysing and understanding the mechanisms of friction and wear in the contact between materials. The problem is addressed at different scales, and a combined approach between continuum mechanics and molecular dynamics is proposed. To this end, numerical simulations of increasing complexity were performed for the contact between surfaces, starting from a first model that uses molecular dynamics to simulate the effect of crystal defects on pure sliding. Subsequently, considerations about the adhesion phenomenon were introduced into the continuum mechanics models. The results are validated by comparing the two approaches with each other and with the literature.

Relevance: 60.00%

Abstract:

The article examines what the profession means to journalists and how this meaning affects work-family reconciliation. The paper focuses on the daily press industry in eastern Spain. The data were collected through 38 in-depth biographical interviews with female journalists. The results reveal two approaches to defining the profession: a personal one and a group one. The first approach centres on personal accomplishment; the second reflects the stereotype of the journalist. Both approaches demand dedication. In addition, they are linked to other structural factors of the job (e.g. schedules) that define different personal labour situations (which also depend on the journalists' professional and life trajectories). In this way, the meaning of the profession shapes decision-making about reconciliation.

Relevance: 60.00%

Abstract:

This article analyzes the appropriateness of a text summarization system, COMPENDIUM, for generating abstracts of biomedical papers. Two approaches are suggested: an extractive one (COMPENDIUM E), which only selects and extracts the most relevant sentences of the documents, and an abstractive-oriented one (COMPENDIUM E–A), which thereby also faces the challenge of abstractive summarization. This novel strategy combines extracted sentences with pieces of information from the article that have previously been compressed or fused. Specifically, in this article we want to study: i) whether COMPENDIUM produces good summaries in the biomedical domain; ii) which summarization approach is more suitable; and iii) the opinion of real users of automatic summaries. Therefore, two types of evaluation were performed, quantitative and qualitative, assessing both the information contained in the summaries and user satisfaction. Results show that extractive and abstractive-oriented summaries perform similarly with respect to the information they contain, so both approaches are able to keep the relevant information of the source documents, but the latter is more appropriate from a human perspective when a user satisfaction assessment is carried out. This also confirms the suitability of the suggested approach for generating summaries under an abstractive-oriented paradigm.
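
COMPENDIUM's internals are not given in the abstract, so the Python sketch below only illustrates the generic extractive step that COMPENDIUM E builds on (score sentences, keep the top ones); the frequency-based scoring is an assumption, not the system's actual relevance model:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=3):
    # Naive extractive baseline: split into sentences, score each sentence by
    # the summed document frequency of its words, keep the top-scoring ones.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z]+", sentences[i].lower())),
    )
    keep = sorted(ranked[:n_sentences])  # restore original document order
    return " ".join(sentences[i] for i in keep)
```

An abstractive-oriented system would additionally compress or fuse the selected sentences rather than copying them verbatim, which is the extension the article evaluates.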

Relevance: 60.00%

Abstract:

From the Introduction. The present contribution is an attempt to raise awareness between the 'trenches' by juxtaposing the two approaches to subsidiarity. Subsequently, I shall set out why, in economics, subsidiarity is embraced as a key principle in the design and working of the Union, and how a functional subsidiarity test can be derived from this thinking. Throughout the paper, a range of illustrations and examples is provided in an attempt to show the practical applicability of a subsidiarity test. This does not mean, of course, that the application of the test can automatically "solve" all debates on whether subsidiarity is (not) violated. What it does mean, however, is that a careful methodology can be a significant help to national parliaments and the Brussels circuit in particular, discouraging careless politicisation as much as possible and rendering assessments of subsidiarity comparable throughout the Union. The latter virtue should be of interest to national parliaments when cooperating, within just six weeks, on a common stance in the case of a suspected violation of the principle. The structure of the paper is as follows. Section 2 gives a flavour of the very different approaches to, and appreciations of, the subsidiarity principle in European law and in the economics of multi-tier government. Section 3 elaborates on the economics of multi-tier government as a special instance of cost/benefit analysis of (de)centralisation in the three public economic functions of any government system. This culminates in a five-step subsidiarity test and a brief discussion of its proper and improper application. Section 4 applies the test in a non-technical fashion to a range of issues in the "efficiency function" (i.e. allocation and markets) of the EU. After showing that the functional logic of subsidiarity may require liberalisation to be accompanied by various degrees of centralisation, a number of fairly detailed illustrations of how to deal with subsidiarity in the EU are provided. One illustration concerns how the subsidiarity logic is misused by protagonists (labour in the internal market). A slightly different but frequently encountered aspect consists in the refusal to recognize that the EU (that is, some form of centralisation) offers a better solution than 25 national ones. A third range of issues where the functional logic of subsidiarity could be useful emerges when the boundaries of national competences shift due to more intense cross-border flows and developments. Other subsections are devoted to Union public goods and to the question of whether the subsidiarity test might identify instances for EU decentralisation: a partial or complete shift of a policy or regulation to Member States. The paper refrains from applying the subsidiarity test to the other two public functions, namely equity and macro-economic stabilisation. Section 5 argues that a well-developed methodology for a functional subsidiarity test would be most useful to national parliaments, and even more so to their cooperation in case of a suspected violation of subsidiarity. Section 6 concludes.

Relevance: 60.00%

Abstract:

This study gives an overview of the theoretical foundations, empirical procedures and derived results of the literature identifying determinants of land prices. Special attention is given to the effects of different government support policies on land prices. Since almost all empirical studies on the determination of land prices take either the net present value method or the hedonic pricing approach as their theoretical basis, a short review of these models is provided. While the two approaches have different theoretical bases, their empirical implementations converge. Empirical studies use a broad range of variables to explain land values, and we systematise them into six categories. In order to investigate the influence of different measures of government support on land prices, a meta-regression analysis is carried out. Our results reveal a significantly higher rate of capitalisation for decoupled direct payments and a significantly lower rate of capitalisation for agri-environmental payments, compared with the rest of government support. Furthermore, the results show that using theoretically consistent land rents (returns to land) and including non-agricultural variables such as urban pressure in the regression implies lower elasticities of capitalisation. In addition, we find a significant influence of the land type, the data type and the estimation techniques on the capitalisation rate.
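
For reference, the two theoretical bases the survey mentions can be stated in minimal form; the covariates in the hedonic specification are placeholders, not the variables any particular study uses:

```latex
% Net present value: land price as the discounted stream of land rents R_t;
% with a constant rent R and discount rate r this collapses to R/r.
P = \sum_{t=1}^{\infty} \frac{R_t}{(1+r)^{t}}, \qquad R_t \equiv R \;\Rightarrow\; P = \frac{R}{r}
% Hedonic approach: price as a function of parcel and location attributes
% x_1, \dots, x_k (e.g. soil quality, urban pressure, support payments).
P = f(x_1, x_2, \dots, x_k) + \varepsilon
```

Under the NPV view, a permanent per-hectare payment s capitalises into the price as s/r, which is one way to read the higher capitalisation rates found for decoupled direct payments.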

Relevance: 60.00%

Abstract:

We compare eight pollen records reflecting climatic and environmental change from the tropical Andes. Our analysis focuses on the last 50 ka, with particular emphasis on the Pleistocene to Holocene transition. We explore ecological grouping and downcore ordination results as two approaches for extracting environmental variability from pollen records. We also use the records of aquatic and shoreline vegetation as markers for lake level fluctuations, and precipitation change. Our analysis focuses on the signature of millennial-scale variability in the tropical Andes, in particular, Heinrich stadials and Greenland interstadials. We identify rapid responses of the tropical vegetation to this climate variability, and relate differences between sites to moisture sources and site sensitivity.
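
"Downcore ordination" is not detailed in the abstract; a minimal sketch of what such an analysis typically looks like (PCA of a samples x taxa percentage matrix, with sample scores then read as a curve down the core) is given below, purely as an assumed illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

def downcore_ordination(pollen_pct, n_axes=2):
    # pollen_pct: (n_samples, n_taxa) matrix of pollen percentages,
    # rows ordered by depth (core top to core bottom)
    X = pollen_pct - pollen_pct.mean(axis=0)          # centre each taxon
    return PCA(n_components=n_axes).fit_transform(X)  # sample scores per axis
```

Plotting the axis-1 scores against depth or age yields the downcore environmental signal that can be compared across sites.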

Relevance: 60.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 60.00%

Abstract:

Thesis (D.M.A.)--University of Washington, 2016-06

Relevance: 60.00%

Abstract:

We employ two different methods, based on belief propagation and on TAP, for decoding corrupted messages encoded by Sourlas's method, where the codeword comprises products of K bits selected randomly from the original message. We show that the equations obtained by the two approaches are similar and provide the same solution as the one obtained by the replica approach for the case K=2. However, we also show that for K >= 3 and unbiased messages the iterative solution is sensitive to the initial conditions and is likely to provide erroneous solutions, and that it is generally beneficial to use Nishimori's temperature, especially in the case of biased messages.
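
For readers outside the statistical-physics literature, the setup being decoded can be stated in minimal form (the notation here is ours, not the paper's):

```latex
% Sourlas code: each codeword bit is a product of K randomly chosen
% message bits \xi_i \in \{+1,-1\}
J_{\langle i_1 \dots i_K \rangle} = \xi_{i_1} \xi_{i_2} \cdots \xi_{i_K}
% Nishimori's temperature matches the decoding (inverse) temperature to
% the channel's flip probability p of the received couplings J:
\beta_N = \tfrac{1}{2} \ln \frac{1-p}{p}
```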

Relevance: 60.00%

Abstract:

This thesis is a study of three techniques for improving the performance of some standard forecasting models, applied to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure, with two approaches: multicomponent-forecasts and direct-forecasts. We compared these approaches empirically and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period, by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model is proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. The thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method extends earlier work for models that are linear in their parameters to the non-linear multilayer perceptron, and therefore broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve performance. We combined them with some standard forecasting models: multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provide a good improvement in prediction performance.
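
A minimal Python sketch of the multicomponent-forecast idea (decompose the series into additive wavelet components, forecast each component, sum the component forecasts). The AR(1) component model is a toy stand-in for the thesis's MLP/RBF/GARCH models, and the db4 wavelet and decomposition level are assumed choices:

```python
import numpy as np
import pywt

def ar1_forecast(series):
    # Toy AR(1) stand-in for the thesis's component models:
    # least-squares slope, then a one-step-ahead forecast.
    x, y = series[:-1], series[1:]
    denom = x.dot(x)
    return 0.0 if denom == 0 else (x.dot(y) / denom) * series[-1]

def multicomponent_forecast(signal, wavelet="db4", level=3):
    # Split the series into additive components by reconstructing with all
    # but one coefficient band zeroed, then forecast each component and sum.
    signal = np.asarray(signal, dtype=float)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    forecast = 0.0
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        component = pywt.waverec(kept, wavelet)[: len(signal)]
        forecast += ar1_forecast(component)
    return forecast
```

The direct-forecast alternative would instead feed the wavelet coefficients (or the denoised series) into a single model, which is the comparison the thesis reports.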

Relevance: 60.00%

Abstract:

Cellular mobile radio systems will be of increasing importance in the future. This thesis describes research work concerned with the teletraffic capacity and the computer control requirements of such systems. The work involves theoretical analysis and experimental investigations using digital computer simulation. New formulas are derived for the congestion in single-cell systems in which there are both land-to-mobile and mobile-to-mobile calls and in which mobile-to-mobile calls go via the base station. Two approaches are used: the first yields modified forms of the familiar Erlang and Engset formulas, while the second gives more complicated but more accurate formulas. The results of computer simulations to establish the accuracy of the formulas are described. New teletraffic formulas are also derived for the congestion in multi-cell systems. Fixed, dynamic and hybrid channel assignments are considered. The formulas agree with previously published simulation results. Simulation programs are described for the evaluation of the speech traffic of mobiles and for the investigation of a possible computer network for the control of the speech traffic. The programs were developed according to the structured programming approach, leading to programs of modular construction. Two simulation methods are used for the speech traffic: the roulette method and the time-true method. The first is economical but has some restrictions, while the second is expensive but gives comprehensive answers. The proposed control network operates at three hierarchical levels, performing control functions that include the setting-up and clearing-down of calls, the hand-over of calls between cells, and the address-changing of mobiles travelling between cities. The results demonstrate the feasibility of the control network and indicate that small mini-computers interconnected via voice-grade data channels would be capable of providing satisfactory control.
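
The classical single-cell baseline that the thesis's modified formulas extend is the Erlang B blocking probability, computed below with its standard numerically stable recursion (a sketch; the land-to-mobile / mobile-to-mobile variants derived in the thesis are not reproduced here):

```python
def erlang_b(traffic_erlangs, channels):
    # Erlang B blocking probability via the stable recursion:
    # B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# e.g. erlang_b(10.0, 15) gives the blocking probability for 10 erlangs
# of offered traffic on a 15-channel cell
```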

Relevance: 60.00%

Abstract:

To reduce global biodiversity loss, there is an urgent need to determine the most efficient allocation of conservation resources. Recently, there has been a growing trend for many governments to supplement public ownership and management of reserves with incentive programs for conservation on private land. This raises important questions, such as the extent to which private land conservation can improve conservation outcomes, and how it should be mixed with more traditional public land conservation. We address these questions, using a general framework for modelling environmental policies and a case study examining the conservation of endangered native grasslands to the west of Melbourne, Australia. Specifically, we examine three policies that involve i) spending all resources on creating public conservation areas; ii) spending all resources on an ongoing incentive program where private landholders are paid to manage vegetation on their property with 5-year contracts; and iii) splitting resources between these two approaches. The performance of each strategy is quantified with a vegetation condition change model that predicts future changes in grassland quality. Of the policies tested, no one policy was always best and policy performance depended on the objectives of those enacting the policy. Although policies to promote conservation on private land are proposed and implemented in many areas, they are rarely evaluated in terms of their ecological consequences. This work demonstrates a general method for evaluating environmental policies and highlights the utility of a model which combines ecological and socioeconomic processes.
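
The vegetation condition change model is not specified in the abstract; one hypothetical minimal form of such a model is a Markov chain over discrete condition classes, sketched below purely for illustration (the states, transition probabilities and management effect are invented, not taken from the study):

```python
import numpy as np

# Hypothetical illustration only: three grassland condition classes
# (good, degraded, poor) with annual transition probabilities; management
# (public reserve or paid private stewardship) shifts the transitions.
P_UNMANAGED = np.array([[0.80, 0.15, 0.05],
                        [0.05, 0.75, 0.20],
                        [0.00, 0.05, 0.95]])
P_MANAGED = np.array([[0.95, 0.04, 0.01],
                      [0.20, 0.70, 0.10],
                      [0.02, 0.18, 0.80]])

def project_condition(state, managed, years):
    # state: probability vector over the condition classes
    P = P_MANAGED if managed else P_UNMANAGED
    for _ in range(years):
        state = state @ P
    return state

# e.g. project_condition(np.array([0.2, 0.5, 0.3]), managed=True, years=20)
```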