787 results for Gradient-based approaches
Abstract:
Farming systems research is a multi-disciplinary, holistic approach to solving the problems of small farms. Small and marginal farmers are the core of the Indian rural economy, constituting 80% of the total farming community but possessing only 36% of the total operational land. The declining trend of per capita land availability poses a serious challenge to the sustainability and profitability of farming. Under such conditions, it is appropriate to integrate land-based enterprises such as dairy, fishery, poultry, duckery, apiary, and field and horticultural cropping within the farm, with the objective of generating adequate income and employment for these small and marginal farmers under a set of farm constraints and varying levels of resource availability and opportunity. The integration of different farm enterprises can be achieved with the help of a linear programming model. For the current review, integrated farming systems models were developed, by way of illustration, for the marginal, small, medium and large farms of eastern India using linear programming. Risk analyses were carried out for different levels of income and enterprise combinations. The fishery enterprise was shown to be less risk-prone, whereas the crop enterprise involved greater risk. In general, the degree of risk increased with the increasing level of income. With increases in farm income and risk level, resource use efficiency increased. Medium and large farms proved more profitable than small and marginal farms, with a higher level of resource use efficiency and return per Indian rupee (Rs) invested. Among the different enterprises of integrated farming systems, a chain of interaction and resource flow was observed. To make farming profitable and improve resource use efficiency at the farm level, the synergy among interacting components of farming systems should be exploited. In the process of technology generation, transfer and other developmental efforts at the farm level (contrary to the discipline- and commodity-based approaches, which tend to be piecemeal and conducted in isolation), it is desirable to place a whole-farm scenario before farmers to enhance their farm income, thereby motivating them towards more efficient and sustainable farming.
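To make the linear-programming step concrete, here is a minimal sketch of how such an enterprise-mix model can be posed; the enterprises, return coefficients and constraint values are illustrative assumptions, not figures from the reviewed study:

```python
# Hypothetical linear-programming sketch of an integrated-farming-system plan:
# choose land allocations (ha) for three enterprises to maximise net return
# subject to land and labour constraints. All coefficients are illustrative.
from scipy.optimize import linprog

# Assumed net return per hectare (Rs) for crop, dairy (fodder), and fishery (pond).
returns = [30_000, 45_000, 60_000]

# linprog minimises, so negate the returns to maximise total income.
c = [-r for r in returns]

# Resource constraints: total land (ha) and annual labour (person-days).
A_ub = [
    [1.0, 1.0, 1.0],   # land used per hectare allocated to each enterprise
    [150, 320, 260],   # labour demand, person-days per hectare
]
b_ub = [2.0, 600]      # a 2 ha marginal farm with 600 person-days available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
crop, dairy, fish = res.x
print(f"crop {crop:.2f} ha, dairy {dairy:.2f} ha, fishery {fish:.2f} ha, "
      f"income Rs {-res.fun:,.0f}")
```

A risk analysis of the kind described could then be approximated by re-solving the model under perturbed return coefficients and comparing the resulting enterprise mixes and incomes.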
Abstract:
Most newly sequenced proteins are likely to adopt a similar structure to one which has already been experimentally determined. For this reason, the most successful approaches to protein structure prediction have been template-based methods. Such prediction methods attempt to identify and model the folds of unknown structures by aligning the target sequences to a set of representative template structures within a fold library. In this chapter, I discuss the development of template-based approaches to fold prediction, from the traditional techniques to the recent state-of-the-art methods. I also discuss the recent development of structural annotation databases, which contain models built by aligning the sequences from entire proteomes against known structures. Finally, I run through a practical step-by-step guide for aligning target sequences to known structures and contemplate the future direction of template-based structure prediction.
Abstract:
This paper presents our experience of combining statistical principles and participatory methods to generate national statistics. The methodology was developed in Malawi during 1999–2002. We demonstrate that if PRA (participatory rural appraisal) is combined with statistical principles (including probability-based sampling and standardization), it can produce total population statistics and estimates of the proportion of households with certain characteristics (e.g., poverty). It can also provide quantitative data on complex issues of national importance such as poverty targeting. This approach is distinct from previous PRA-based approaches, which generate numbers at community level but provide only qualitative information at national level.
Abstract:
Ancient DNA (aDNA) research has long depended on the power of PCR to amplify trace amounts of surviving genetic material from preserved specimens. While PCR permits specific loci to be targeted and amplified, in many ways it can be intrinsically unsuited to damaged and degraded aDNA templates. PCR amplification of aDNA can produce highly skewed distributions, with significant contributions from miscoding lesion damage and non-authentic sequence artefacts. As traditional PCR-based approaches have over many years been unable to fully resolve the molecular nature of aDNA damage, we have developed a novel single primer extension (SPEX)-based approach to generate more accurate sequence information. SPEX targets selected template strands at defined loci and can generate a quantifiable redundancy of coverage, providing new insights into the molecular nature of aDNA damage and fragmentation. SPEX sequence data reveal inherent limitations in both traditional and metagenomic PCR-based approaches to aDNA, which can make current damage analyses and correct genotyping of ancient specimens problematic. In contrast to previous aDNA studies, SPEX provides strong quantitative evidence that C→U-type base modifications are the sole cause of authentic endogenous damage-derived miscoding lesions. This new approach could allow ancient specimens to be genotyped with unprecedented accuracy.
Abstract:
Many techniques are currently used for motion estimation. In block-based approaches, the most common procedure is block matching, performed using various search algorithms. To refine the motion estimates resulting from a full search or any coarse search algorithm, a few applications of Kalman filtering can be found, mainly in the intraframe scheme. The applicability of the Kalman filtering technique to block-based motion estimation is rather limited, owing to discontinuities in the dynamic behaviour of the motion vectors. We therefore propose applying the concept of filtering by approximated densities (FAD). The FAD, originally introduced to alleviate the limitations of conventional Kalman modelling, is applied here to interframe block-motion estimation. This application uses a simple form of FAD involving statistical characteristics of multi-modal distributions up to second order.
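For readers unfamiliar with the block-matching procedure referred to above, a minimal full-search sketch follows; the sum-of-absolute-differences criterion and search radius are standard textbook choices, not specifics of the proposed FAD method:

```python
import numpy as np

def full_search(block, ref_frame, top, left, radius=7):
    """Full-search block matching: return the motion vector (dy, dx) that
    minimises the sum of absolute differences (SAD) within +/- radius."""
    h, w = block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + h, x:x + w]
            sad = np.abs(block.astype(int) - cand.astype(int)).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# Usage: estimate the motion of a 16x16 block between two synthetic frames.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
curr = np.roll(prev, (2, -3), axis=(0, 1))            # known shift of (2, -3)
print(full_search(curr[16:32, 16:32], prev, 16, 16))  # expect (-2, 3)
```

A coarse-search algorithm would evaluate only a subset of these candidate displacements; the refinement stage discussed in the abstract then operates on the resulting motion vectors.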
Abstract:
CVD still represents the greatest cause of death and disease burden in Europe, and there remains uncertainty over whether diets rich in milk and/or dairy products affect CVD risk. This paper reviews current evidence on this from prospective studies, together with the role of serum lipids and blood pressure as markers of CVD risk with such diets. The potential of animal nutrition-based approaches aimed at reducing the CVD risk associated with consumption of milk and dairy products is also outlined. Briefly, the evidence from prospective studies indicates that increased consumption of milk does not result in increased CVD risk and may give some long-term benefits, although few studies relate specifically to cheese and butter, and more information is needed on the relationship between milk/dairy product consumption and dementia. Recent data suggest that the SFA in dairy products may be less of a risk factor than previously thought, although this is based on serum cholesterol responses, which taken in isolation may be misleading. Milk and some dairy products have counterbalancing effects, reducing blood pressure and possibly aiding BMI control. Despite this, animal nutrition strategies to replace some SFA in milk with cis-MUFA or cis-PUFA are extensive and intuitively beneficial, although the benefit remains largely unproven, especially for milk. There is an urgent need for robust intervention studies to evaluate such milk-fat modifications using holistic markers of CVD risk, including central arterial stiffness.
Abstract:
Armed with the ‘equity’ and ‘conservation’ arguments that have a deep resonance with farming communities, developing countries are crafting a range of measures designed to protect farmers’ access to innovations, reward their contributions to the conservation and enhancement of plant genetic resources and provide incentives for sustained on-farm conservation. These measures range from the commercialization of farmers’ varieties to the conferment of a set of legally enforceable rights on farming communities – the exercise of which is expected to provide economic rewards to those responsible for on-farm conservation and innovation. The rights-based approach has been the cornerstone of legislative provision for implementing farmers’ rights in most developing countries. In drawing up these measures, developing countries do not appear to have systematically examined or provided for the substantial institutional capacity required for the effective implementation of farmers’ rights provisions. The lack of institutional capacity threatens to undermine any prospect of serious implementation of these provisions. More importantly, the expectation that significant incentives for on-farm conservation and innovation will flow from these ‘rights’ may be based on a flawed understanding of the economics of intellectual property rights. While farmers’ rights may provide only limited rewards for conservation, they may still have the effect of diluting the incentives for innovative institutional breeding programs – with the private sector increasingly relying on non-IPR instruments to profit from innovation. The focus on a rights-based approach may also draw attention away from alternative stewardship-based approaches to the realization of farmers’ rights objectives.
Abstract:
The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
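A minimal sketch of the epidemic (gossip) principle underlying such a decentralised K-Means is given below; this is a hypothetical illustration of pairwise averaging of local per-cluster statistics, not the authors' EpidemicK-Means implementation:

```python
import numpy as np

def gossip_kmeans_round(local_stats, pairs):
    """One gossip round: each pair of nodes averages its per-cluster
    (sum, count) statistics, so all nodes drift towards the global
    centroids without any central coordinator."""
    for a, b in pairs:
        for k in local_stats[a]:
            avg_sum = (local_stats[a][k][0] + local_stats[b][k][0]) / 2
            avg_cnt = (local_stats[a][k][1] + local_stats[b][k][1]) / 2
            local_stats[a][k] = local_stats[b][k] = (avg_sum, avg_cnt)
    return local_stats

# Usage: two clusters (k = 0, 1) and four nodes, each holding local
# per-cluster sums and point counts from its own data partition.
rng = np.random.default_rng(1)
stats = [{k: (rng.normal(size=2), float(rng.integers(1, 10))) for k in (0, 1)}
         for _ in range(4)]
for _ in range(20):  # repeated rounds converge towards the global averages
    stats = gossip_kmeans_round(stats, [(0, 1), (2, 3), (0, 2), (1, 3)])
centroids = {k: stats[0][k][0] / stats[0][k][1] for k in (0, 1)}
print(centroids)     # every node now holds approximately the global centroids
```

Because pairwise averaging conserves the global sums and counts, repeated rounds let every node approximate the centroid update a centralised K-Means step would compute, with no global synchronisation and no single point of failure.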
Landscape, regional and global estimates of nitrogen flux from land to sea: errors and uncertainties
Abstract:
Regional- to global-scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions or continents on the basis of a limited selection of model parameters. Watershed-scale N flux modelling has developed a range of physically based approaches, from models in which N flux rates are predicted through a physical representation of the processes involved, through to catchment-scale models which provide a simplified representation of true system behaviour. Generally, these watershed-scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional- to global-scale models, and the nature of the error propagation associated with upscaling from the small catchment to the regional scale, through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed-scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional-scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact this has on N cycling processes within homogeneous landscape units.
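For orientation, the export coefficient model named above is conventionally written in the following standard form (commonly attributed to Johnes, 1996); this is the textbook formulation, not an equation quoted from this paper:

```latex
% Total N load L exported from a watershed: n source types, each with an
% export coefficient E_i applied to its area or animal numbers A_i receiving
% nutrient inputs I_i, plus atmospheric deposition p.
L = \sum_{i=1}^{n} E_i \, A_i(I_i) + p
```

The spatial aggregation and lumping experiments described in the abstract amount to varying how the source terms A_i are delineated across the landscape and observing how the predicted load L degrades.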
Abstract:
Coupling a review of previous studies on the acquisition of grammatical aspect, undertaken from contrasting paradigmatic views of second language acquisition (SLA), with new experimental data from L2 Portuguese, the present study contributes to this specific literature as well as to general debates in L2 epistemology. We tested 31 adult English learners of L2 Portuguese across three experiments, examining the extent to which they had acquired the syntax and (subtle) semantics of grammatical aspect. Many individuals acquired target knowledge of what we contend is a poverty-of-the-stimulus semantic entailment related to the checking of aspectual features encoded in Portuguese preterit and imperfect morphology, namely a [±accidental] distinction that obtains in a restricted subset of contexts. We therefore conclude that UG-based approaches to SLA are better positioned to tap and gauge underlying morphosyntactic competence: because they are based on independent theoretical linguistic descriptions, they make falsifiable predictions that are amenable to empirical scrutiny, seek to describe and explain beyond performance, and can account for L2 convergence on poverty-of-the-stimulus knowledge as well as for L2 variability/optionality.
Abstract:
Despite strong prospective epidemiology and mechanistic evidence for the benefits of certain micronutrients in preventing CVD, neutral and negative outcomes from secondary intervention trials have cast doubt on the efficacy of supplemental nutrition in preventing CVD. In contrast, evidence for the positive impact of specific diets in CVD prevention, such as the Dietary Approaches to Stop Hypertension (DASH) diet, has focused attention on the potential benefits of whole diets and specific dietary patterns. These patterns have been scored on the basis of current guidelines for the prevention of CVD, to provide a quantitative evaluation of the relationship between diet and disease. Using this approach, large prospective studies have reported reductions in CVD risk ranging from 10 to 60% in groups whose diets can be variously classified as 'Healthy', 'Prudent', 'Mediterranean' or 'DASH compliant'. Evaluation of the relationship between dietary score and risk biomarkers has also been informative with respect to underlying mechanisms. However, although this analysis may appear to validate whole-diet approaches to disease prevention, it must be remembered that the classification of dietary scores is based on current understanding of diet-disease relationships, which may be incomplete or erroneous. Of particular concern is the limited number of high-quality intervention studies of whole diets which include disease endpoints as the primary outcome. The aims of this review are to highlight the limitations of dietary guidelines based on nutrient-specific data and the persuasive evidence for the benefits of whole dietary patterns on CVD risk. It also makes a plea for more randomised controlled trials designed to support food-based and whole-diet approaches to preventing CVD.
Abstract:
Tremendous progress in plant proteomics, driven by mass spectrometry (MS) techniques, has been made since 2000, when few proteomics reports were published and plant proteomics was in its infancy. These achievements include the refinement of existing techniques and the search for new techniques to address food security, safety, and health issues. It is projected that in 2050 the world's population will reach 9–12 billion people, demanding a 34–70% increase over today's food production (FAO, 2009). Providing food in a sustainable and environmentally committed manner for such a demand, without threatening natural resources, requires that agricultural production increase significantly and that postharvest handling and food manufacturing systems become more efficient, with lower energy expenditure, a decrease in postharvest losses, less waste generation, and food with a longer shelf life. There is also a need to look for protein sources alternative to animal-based ones (i.e., plant-based) to fulfill the increase in protein demand by 2050. Thus, plant biology has a critical role to play as a science capable of addressing such challenges. In this review, we discuss proteomics, especially MS, as a platform utilized in plant biology research for the past 10 years that has the potential to expedite the process of understanding plant biology for human benefit. The increasing application of proteomics technologies to food security, analysis, and safety is emphasized in this review. We are aware, however, that no single approach or technology is capable of addressing the global food issues. Proteomics-generated information and resources must be integrated and correlated with other omics-based approaches, information, and conventional programs to ensure sufficient food and resources for human development now and in the future.
Abstract:
Urbanization is one of the major forms of habitat alteration occurring at the present time. Although this is typically deleterious to biodiversity, some species flourish within these human-modified landscapes, potentially leading to negative and/or positive interactions between people and wildlife. Hence, up-to-date assessment of urban wildlife populations is important for developing appropriate management strategies. Surveying urban wildlife is limited by land partition and private ownership, rendering many common survey techniques difficult. Garnering public involvement is one solution, but this method is constrained by the inherent biases of non-standardised survey effort associated with voluntary participation. We used a television-led media approach to solicit national participation in an online sightings survey, to investigate changes in the distribution of urban foxes in Great Britain and to explore relationships between urban features and fox occurrence and sightings density. Our results show that media-based approaches can generate a large national database on the current distribution of a recognisable species. Fox distribution in England and Wales has changed markedly within the last 25 years, with sightings submitted from 91% of urban areas previously predicted to support few or no foxes. Data were highly skewed, with 90% of urban areas having <30 fox sightings per 1000 people km⁻². The extent of total urban area was the only variable with a significant impact on both fox occurrence and sightings density in urban areas; longitude and percentage of public green urban space were, respectively, significantly positively and negatively associated with sightings density only. Latitude and distance to the nearest neighbouring conurbation had no impact on either occurrence or sightings density. Given the limitations associated with this method, further investigations are needed to determine the association between sightings density and actual fox density, and the variability of fox density within and between urban areas in Britain.
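As an illustration of the kind of occurrence analysis described (relating fox presence to urban covariates), a minimal logistic-regression sketch follows; the covariates mirror those named in the abstract, but the data and coefficients are simulated stand-ins, not the study's dataset:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: one row per urban area, with covariates resembling those
# in the abstract (total urban area, longitude, % public green space).
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.gamma(2.0, 10.0, n),   # total urban area, km^2
    rng.uniform(-5, 2, n),     # longitude, degrees
    rng.uniform(0, 40, n),     # public green space, %
])
logit = -2.0 + 0.05 * X[:, 0]  # simulate occurrence driven by urban extent only
fox_present = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic GLM: P(fox occurrence) ~ urban extent + longitude + green space.
model = sm.GLM(fox_present, sm.add_constant(X), family=sm.families.Binomial())
print(model.fit().summary())
```

With data generated this way, only the urban-extent coefficient should emerge as significant, mirroring the pattern the abstract reports for occurrence.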
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised 'inner-loop' objective function which, upon convergence, updates the solution of the non-linear 'outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data; it can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates how sensitive the minimisation of both wc4DVAR objective functions is to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that the sensitivities of both formulations are related to the error variance balance, the assimilation window length and the correlation length-scales. We demonstrate this further through numerical experiments on the condition number, and through data assimilation experiments using linear and non-linear chaotic toy models.
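For reference, the objective functions discussed above take the following standard forms in the data assimilation literature (the notation is conventional, not copied from the thesis):

```latex
% Strong-constraint 4DVAR: background term plus observation misfits over the
% assimilation window, with B and R_i the background and observation error
% covariances, H_i the observation operator, M_{0->i} the model propagator.
J(x_0) = \tfrac{1}{2}(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{N} \bigl(H_i(x_i) - y_i\bigr)^{\mathsf T}
         R_i^{-1} \bigl(H_i(x_i) - y_i\bigr),
\qquad x_i = M_{0 \to i}(x_0).

% The weak-constraint model error formulation adds a model-error penalty,
% with Q_i the model-error covariance and \eta_i = x_i - M_{i-1 \to i}(x_{i-1}):
J_{\mathrm{wc}} = J + \tfrac{1}{2}\sum_{i=1}^{N} \eta_i^{\mathsf T} Q_i^{-1} \eta_i .

% The conditioning measure referred to in the abstract is the condition
% number of the Hessian S: \kappa(S) = \lambda_{\max}(S) / \lambda_{\min}(S).
```

The bounds described in the abstract constrain κ(S) in terms of the covariance parameters (error variances and correlation length-scales) and the window length N.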
Abstract:
Background: Arboviruses have overlapping geographical distributions and can cause symptoms that coincide with more common infections. Therefore, arbovirus infections are often neglected by travel diagnostics. Here, we assessed the potential of syndrome-based approaches for the diagnosis and surveillance of neglected arboviral diseases in returning travelers. Method: To map the patients at high risk of missed clinical arboviral infections, we compared the quantity of all arboviral diagnostic requests by physicians in the Netherlands from 2009 through 2013 with a literature-based assessment of the travelers' likely exposure to an arbovirus. Results: 2153 patients with travel and clinical history were evaluated. The diagnostic assay for dengue virus (DENV) was the most commonly requested (86%). Of travelers returning from Southeast Asia with symptoms compatible with chikungunya virus (CHIKV), only 55% were tested. For travelers in Europe, arbovirus diagnostics were rarely requested. Overall, diagnostics for most arboviruses were requested only on severe clinical presentation. Conclusion: Travel destination and syndrome were used inconsistently for the triage of diagnostics, likely resulting in vast under-diagnosis of arboviral infections of public health significance. This study shows the need for more awareness among physicians and for standardization of syndromic diagnostic algorithms.