924 results for Dairy cattle Breeding Australia Statistics Data processing


Relevance:

100.00%

Publisher:

Abstract:

This study aimed to analyze the energy efficiency ratings and costs of family farms producing milk in the municipality of Pardinho (SP), Brazil. The hypothesis guiding the study is that energy expenditure may coincide with economic expenditure, showing that there is a relationship between these flows, which may or may not be sustainable. To define the producers studied, the criteria of the official FEAP rural credit system were used. Through primary data obtained from interviews, the technical itineraries were reconstructed, detailing the operations employed. Two producers with different technical routes were found. One producer achieved the highest energy and economic efficiencies, 8.66 and 1.48, respectively. Comparing the two efficiency measures shows that they are closely related; taken together, they give a broader idea of the allocation of energy resources and, thus, a better view of the sustainability of the agroecosystem.

Relevance:

100.00%

Publisher:

Abstract:

The study aimed to analyze the energy and economic efficiency rates of family farms producing milk in the municipality of Pardinho, São Paulo State. The criteria used to define the producers in this study are those outlined by the Brazilian agricultural credit system FEAP (Fund for the Expansion of Agribusiness Paulista). Through primary data obtained from verbal reports, the agroecosystem technical itineraries were re-established, detailing the processes applied, machinery, implements, equipment, supplies, and manual labor. These were converted into energy and economic units, which allowed determining the relationship between energy and economic outputs and inputs. The hypothesis of this study is that energy expenditures may coincide with economic expenditures. The energy and economic flows were analyzed using a structure of expenditures by type, source, and form of gross energy. Four producers with different technical itineraries were found. Producers 1 and 2 achieved the highest energy and economic efficiency rates; producer 4 had the lowest. Non-renewable energy sources, such as chemical fertilizers, were the most used, accounting on average for 82.9% of energy expenditures and 52.86% of economic expenditures. Comparing energy and economic efficiency shows that both forms of analysis are close, yielding a broader idea of energy resource allocation.
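
The core calculation in both versions of this study is a ratio of outputs to inputs. As a minimal sketch, with hypothetical figures rather than the paper's data, the energy and economic efficiency rates could be computed as:

```python
# Minimal sketch of output/input efficiency ratios; all figures are
# illustrative placeholders, not data from the study.

def efficiency(outputs: float, inputs: float) -> float:
    """Return the output/input efficiency ratio."""
    if inputs <= 0:
        raise ValueError("inputs must be positive")
    return outputs / inputs

# Hypothetical producer: gross energy of milk sold vs. energy embodied in
# fertilizers, feed, fuel, machinery and labor (MJ), plus the economic flows.
energy_out, energy_in = 520_000.0, 60_000.0   # MJ (hypothetical)
money_out, money_in = 74_000.0, 50_000.0      # currency units (hypothetical)

print(f"energy efficiency:   {efficiency(energy_out, energy_in):.2f}")
print(f"economic efficiency: {efficiency(money_out, money_in):.2f}")
```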

Relevance:

100.00%

Publisher:

Abstract:

This investigation was made in 1929-1930 for the purpose of studying the activities of Nebraska farm women in raising poultry and caring for dairy products, to discover whether or not such activities resulted in a contribution to the family income. With this in view, a group of women were asked to keep records for one year (from April 1, 1929 to March 31, 1930) of the value and amount of dairy and poultry products sold or used, of all expenses incurred in production, and of the time spent, both by the homemaker herself and by all other members of the household, in the production and sale of dairy and poultry products. When this study was outlined, it was intended to cover only actual cash additions to the family income. This, however, did not prove to be feasible, as a considerable portion of the contribution to the family income was in the form of dairy and poultry products used at home.

Relevance:

100.00%

Publisher:

Abstract:

Livestock face complex foraging decisions, needing to optimize nutrient intake while avoiding areas that pose a risk of parasites or disease. Areas of tall, nutrient-rich swards around fecal deposits may be attractive for grazing but might incur fitness costs from parasites. We use the example of dairy cattle and the risk of tuberculosis transmission posed to them by pastures contaminated with badger excreta to examine this trade-off. A risk may be posed either by aerosolized inhalation during investigation or by ingestion while grazing contaminated swards. We quantified the levels of investigation and grazing of 150 dairy cows at badger latrines (accumulations of feces and urine) and crossing points (urination-only sites). Grazing behavior was compared between strip-grazed and rotation-grazed fields. Strip grazing had fields subdivided for grazing periods of <24 h, whereas rotational grazing involved access to whole fields for 1 to 7 d each. A higher proportion of the herd investigated badger latrines than crossing points or controls. Cattle initially avoided swards around badger latrines but not around crossing points. Avoidance periods were shorter in strip- than in rotation-grazing systems. In rotation-grazing management, latrines were avoided for longer, but there were more investigative contacts than with strip-grazing management. If investigation is a major route of tuberculosis transmission, the risk to cattle is greatest in extensive rotation-grazing systems. However, if ingestion of fresh urine is the primary method of transmission, strip-grazing management may pose a greater threat. Farming systems thus affect the level and type of contact between livestock and wildlife excreta, and hence the risks of disease transmission.
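
The central comparison, that a higher proportion of the herd investigated latrines than crossing points or controls, is the kind of result one might check with a contingency-table test. A hedged sketch with made-up counts, not the study's data or analysis code:

```python
# Illustrative chi-square test of independence on hypothetical contact counts.
import numpy as np
from scipy.stats import chi2_contingency

# rows: site type; columns: [investigated, did not investigate]
counts = np.array([
    [42, 108],   # latrines (hypothetical counts out of 150 cows)
    [15, 135],   # crossing points
    [9, 141],    # controls
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```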

Relevance:

100.00%

Publisher:

Abstract:

The demand for "welfare-friendly" products increases as public awareness and perception of livestock production systems grow. The public and policy-makers demand scientific information to educate and to guide decision processes. This paper reviews some of the last decade's contributions made by scientists in the technical, economic, and market areas of farm animal welfare. Articles on animal welfare were compiled under the following themes: 1) consumer behavior, 2) technical and economic viability, 3) public regulation, and 4) private certification policies. Most studies on the economic evaluation of systems that promote animal welfare involved species destined for export products, such as eggs, beef, and pork. Few studies were found on broilers, dairy cows, and fish, and data regarding other species, such as horses, sheep, and goats, were not found. Scientists understand that farm animal welfare is not only a matter of ethics but also an essential tool to gain and maintain markets. However, little attention is paid to species that are not economically important for exports. Studies that emphasize more humane ways to raise animals and that provide economic incentives to the producer are needed. An integrated multidisciplinary approach is necessary to highlight the benefits of introducing animal welfare techniques into existing production systems.

Relevance:

100.00%

Publisher:

Abstract:

The beta-Birnbaum-Saunders (Cordeiro and Lemonte, 2011) and Birnbaum-Saunders (Birnbaum and Saunders, 1969a) distributions have been used quite effectively to model lifetime data and failure times of materials subject to fatigue. We define the log-beta-Birnbaum-Saunders distribution as the distribution of the logarithm of a beta-Birnbaum-Saunders random variable. Explicit expressions for its generating function and moments are derived. We propose a new log-beta-Birnbaum-Saunders regression model that can be applied to censored data and used more effectively in survival analysis. We obtain the maximum likelihood estimates of the model parameters for censored data and investigate influence diagnostics. The new location-scale regression model is also modified to allow for the possibility that long-term survivors are present in the data. Its usefulness is illustrated by means of two real data sets.
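
To make the construction concrete, here is a minimal sketch of the ordinary log-Birnbaum-Saunders case (without the beta generalization, censoring, or covariates): Birnbaum-Saunders lifetimes are simulated through their normal representation, log-transformed, and the parameters recovered by maximum likelihood. This illustrates the idea only; it is not the authors' model or code.

```python
# Simulate BS(alpha, beta) via Z ~ N(0,1), take logs to obtain the
# log-BS (sinh-normal) sample, then fit (alpha, mu = log beta) by MLE.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha_true, beta_true, n = 0.5, 2.0, 2000

z = rng.standard_normal(n)
t = beta_true * (alpha_true * z / 2 + np.sqrt((alpha_true * z / 2) ** 2 + 1)) ** 2
y = np.log(t)                                   # log-BS sample

def neg_loglik(params):
    alpha, mu = params
    if alpha <= 0:
        return np.inf
    xi = (2.0 / alpha) * np.sinh((y - mu) / 2.0)  # standard normal under the model
    return -np.sum(norm.logpdf(xi) + np.log(np.cosh((y - mu) / 2.0)) - np.log(alpha))

res = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
alpha_hat, mu_hat = res.x
print(f"alpha_hat={alpha_hat:.3f} (true {alpha_true}), "
      f"beta_hat={np.exp(mu_hat):.3f} (true {beta_true})")
```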

Relevance:

100.00%

Publisher:

Abstract:

Many of the developmental anomalies observed in cloned animals are related to foetal and placental overgrowth, a phenomenon known as the 'large offspring syndrome' (LOS) in ruminants. It has been hypothesized that the epigenetic control of imprinted genes, that is, genes that are expressed in a parental-specific manner, is at the root of LOS. Our recent research has focused on understanding epigenetic alterations to imprinted genes that are associated with assisted reproductive technologies (ART), such as early embryo in vitro culture (IVC) and somatic cell nuclear transfer (SCNT) in cattle. We have sought and identified single nucleotide polymorphisms in Bos indicus DNA useful for the analysis of parental-specific alleles and their respective transcripts in tissues from hybrid embryos derived by crossing Bos indicus and Bos taurus cattle. By analysing differentially methylated regions (DMRs) of imprinted genes SNRPN, H19 and the IGF2R in cattle, we demonstrated that there is a generalized hypomethylation of the imprinted allele and the biallelic expression of embryos produced by SCNT when compared to the methylation patterns observed in vivo (artificially inseminated). Together, these results indicate that imprinting marks are erased during the reprogramming of the somatic cell nucleus during early development, indicating that such epigenetic anomalies may play a key role in mortality and morbidity of cloned animals.
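
As an illustration of the allele-level analysis this approach enables, a hypothetical helper (not the authors' pipeline) could classify expression at an imprinted locus as monoallelic or biallelic from read counts at a diagnostic SNP distinguishing the Bos indicus and Bos taurus parental alleles:

```python
# Hypothetical classifier for allele-specific expression at a diagnostic SNP.
# Thresholds and counts are illustrative assumptions.

def classify_expression(indicus_reads: int, taurus_reads: int,
                        biallelic_threshold: float = 0.10) -> str:
    """Call expression biallelic if the minor parental allele exceeds the
    given fraction of informative reads (threshold is illustrative)."""
    total = indicus_reads + taurus_reads
    if total == 0:
        return "no data"
    minor_fraction = min(indicus_reads, taurus_reads) / total
    return "biallelic" if minor_fraction >= biallelic_threshold else "monoallelic"

# e.g. loss of imprinting in an SCNT embryo vs. normal monoallelic expression
print(classify_expression(indicus_reads=480, taurus_reads=410))  # biallelic
print(classify_expression(indicus_reads=950, taurus_reads=22))   # monoallelic
```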

Relevance:

100.00%

Publisher:

Abstract:

Clinical, epidemiological, and pathological aspects of trypanosomiasis caused by Trypanosoma vivax in calves are reported for the first time in northeastern Brazil. Clinical and epidemiological data, packed cell volumes (PCV), and parasitemia were assessed in 150 calves in May 2009 (rainy season, survey 1) and in 153 calves in November 2009 (dry season, survey 2) on three farms (A, B, and C). The prevalence of T. vivax in calves examined in survey 1 was 63.3%, 65.0%, and 80.0% on farms A, B, and C, respectively. Morbidity varied from 63.3% to 80%, mortality from 15% to 30%, and lethality from 23% to 37.5%. In survey 1, on all farms, high parasitemia (from 30.3 to 26.2 × 10^6 parasites/mL), fever (39.8 to 40.3 °C), low PCV (15.7% to 18.1%), and low body scores (2.5 to 3.5) were detected. Calves showed depression, weight loss, pale mucous membranes, enlarged lymph nodes, edema of the dewlap, cough, coryza, and diarrhea. The animals from farms A and B were treated with diminazene aceturate. Six months later, in survey 2, non-treated calves from farm C showed values for prevalence (81.82%), morbidity (81.82%), mortality (12.73%), and lethality (15.55%) similar to those in survey 1 (P>0.05). Also in survey 2, four calves only 1-3 days old presented high parasitemia levels (from 32 × 10^6 to 74 × 10^6 parasites/mL), suggesting transplacental transmission. In conclusion, trypanosomiasis caused by T. vivax is a highly prevalent disease in calves raised in the Brazilian semiarid region and may be transmitted transplacentally.
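
The reported measures follow the usual herd-level definitions, with lethality being deaths among clinically affected animals (consistent with the ranges above: 15/63.3 ≈ 23% and 30/80 = 37.5%). A short sketch with illustrative counts, not the study's raw data:

```python
# Herd-level epidemiological measures from simple counts (illustrative).

def rates(n_examined, n_infected, n_sick, n_dead):
    prevalence = 100.0 * n_infected / n_examined
    morbidity = 100.0 * n_sick / n_examined
    mortality = 100.0 * n_dead / n_examined
    lethality = 100.0 * n_dead / n_sick   # case fatality among the sick
    return prevalence, morbidity, mortality, lethality

p, mb, mt, lt = rates(n_examined=50, n_infected=40, n_sick=40, n_dead=12)
print(f"prevalence={p:.1f}%  morbidity={mb:.1f}%  "
      f"mortality={mt:.1f}%  lethality={lt:.1f}%")
```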

Relevance:

100.00%

Publisher:

Abstract:

Background: In tropical countries, losses caused by infestation with the bovine tick Rhipicephalus (Boophilus) microplus have a tremendous economic impact on cattle production systems. Genetic variation between Bos taurus and Bos indicus in tick resistance, together with molecular biology tools, may allow the identification of molecular markers linked to resistance traits that could be used as an auxiliary tool in selection programs. The objective of this work was to identify QTL associated with tick resistance/susceptibility in a bovine F2 population derived from a Gyr (Bos indicus) × Holstein (Bos taurus) cross.

Results: Through a whole-genome scan with microsatellite markers, we were able to map six genomic regions associated with bovine tick resistance. For most QTL, we found that, depending on the tick evaluation season (dry or rainy), different sets of genes could be involved in the resistance mechanism. We identified dry-season-specific QTL on BTA 2 and 10 and rainy-season-specific QTL on BTA 5, 11, and 27. We also found a genome-wide highly significant QTL for both dry and rainy seasons in the central region of BTA 23.

Conclusions: The experimental F2 population derived from the Gyr × Holstein cross successfully allowed the identification of six highly significant QTL associated with tick resistance in cattle. The QTL located on BTA 23 might be related to the bovine histocompatibility complex. Further investigation of these QTL will help to isolate candidate genes involved in tick resistance in cattle.
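
As a simplified stand-in for the genome scan described above, a single-marker association scan regresses the phenotype (e.g. tick counts) on the genotype at each marker and looks for the strongest signals. The sketch below uses synthetic F2 data, not the study's dataset:

```python
# Single-marker regression scan on synthetic F2 genotypes (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_animals, n_markers = 300, 120
# F2 genotypes coded 0/1/2 = number of Gyr (Bos indicus) alleles
genotypes = rng.integers(0, 3, size=(n_animals, n_markers))
qtl = 57                                  # hypothetical causal marker index
phenotype = 10 - 1.5 * genotypes[:, qtl] + rng.normal(0, 2, n_animals)

pvals = np.array([
    stats.linregress(genotypes[:, m], phenotype).pvalue
    for m in range(n_markers)
])
best = int(np.argmin(pvals))
print(f"strongest association at marker {best}, "
      f"-log10(p) = {-np.log10(pvals[best]):.1f}")
```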

Relevance:

100.00%

Publisher:

Abstract:

The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi, and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme, the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a quasi-geostrophic model and a high-dimensional, primitive-equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error at a reasonable computational cost, compared, for example, with a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect-model setting, and also with two types of model error: random and systematic. In the different configurations examined, in the perfect-model setting AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In numerical weather prediction models, tuning of parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
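
A toy illustration of the AUS-BDAS idea on the Lorenz 1963 model is sketched below: a bred vector estimates the locally unstable direction, and at each analysis time the increment is the innovation projected onto that single direction. This is a deliberately simplified perfect-model sketch (unit gain, direct noisy observation of the full state), not the authors' implementation:

```python
# Toy AUS-like assimilation on Lorenz 1963: the analysis increment is
# confined to the bred-vector direction (simplified sketch).
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step(x, dt=0.01):
    # one fourth-order Runge-Kutta step
    k1 = lorenz63(x); k2 = lorenz63(x + dt / 2 * k1)
    k3 = lorenz63(x + dt / 2 * k2); k4 = lorenz63(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
truth = np.array([1.0, 1.0, 1.0])
xa = truth + rng.normal(0, 1.0, 3)        # initial analysis state
bred = rng.normal(0, 1.0, 3)              # breeding perturbation
bred *= 0.1 / np.linalg.norm(bred)
obs_err, assim_every = 1.0, 25            # obs noise std; steps per cycle

errors = []
for n in range(1, 5001):
    truth = step(truth)
    x_prev = xa
    xa = step(x_prev)                     # control forecast
    bred = step(x_prev + bred) - xa       # grow the perturbation...
    bred *= 0.1 / np.linalg.norm(bred)    # ...and rescale it (breeding)
    if n % assim_every == 0:
        y = truth + rng.normal(0, obs_err, 3)   # noisy observation
        e = bred / np.linalg.norm(bred)         # estimated unstable direction
        xa = xa + e * (e @ (y - xa))            # increment in span{e} only
        errors.append(np.linalg.norm(xa - truth))

print(f"mean analysis error over the last 50 cycles: {np.mean(errors[-50:]):.2f}")
```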

Relevance:

100.00%

Publisher:

Abstract:

We present a nonlinear technique to invert strong-motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wavenumber integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite-element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which can easily relate the spectrum of the possible solutions to the minimum resolvable wavelength, determined by the source-station distribution and the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The nonlinear step is solved by optimizing an L2 misfit function between synthetic and real seismograms, and the solution is searched for using the Neighbourhood Algorithm; the conjugate gradient method is used to solve the linear step. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63 × 10^26 dyne·cm, which corresponds to a moment magnitude Mw 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane, and a second, relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
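
As a quick consistency check of the quoted source parameters, the standard Hanks and Kanamori relation converts a seismic moment in dyne·cm to moment magnitude:

```python
# Check that M0 = 2.63e26 dyne·cm corresponds to Mw ≈ 6.9 via the
# Hanks & Kanamori (1979) relation: Mw = (2/3) * log10(M0) - 10.7.
import math

M0 = 2.63e26  # seismic moment, dyne·cm (value quoted above)
Mw = (2.0 / 3.0) * math.log10(M0) - 10.7
print(f"Mw = {Mw:.2f}")  # prints Mw = 6.91
```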

Relevance:

100.00%

Publisher:

Abstract:

The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow coming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectrophotometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain maximum homogeneity in data quality, acquisition, and treatment, and particular care must be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and eventually correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few percent with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context, I defined and tested a semi-automated pipeline that allows for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality-control criteria are included in the pipeline at various levels, from pre-reduction to aperture photometry to light-curve production and analysis.
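
As an illustration of the basic operation such a pipeline automates, the sketch below performs sky-subtracted circular-aperture photometry on a synthetic frame using plain NumPy. It is a minimal stand-in; the actual pipeline, its interfaces, and its quality-control criteria are not reproduced here:

```python
# Circular-aperture photometry with an annulus sky estimate (illustrative).
import numpy as np

def aperture_photometry(image, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[annulus])    # robust sky estimate
    return image[aperture].sum() - sky_per_pixel * aperture.sum()

rng = np.random.default_rng(3)
frame = rng.normal(100.0, 5.0, (64, 64))         # sky background + noise
yy, xx = np.indices(frame.shape)
frame += 5000.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))

flux = aperture_photometry(frame, 32, 32)
mag = -2.5 * np.log10(flux) + 25.0               # arbitrary zero point
print(f"flux = {flux:.0f} counts, instrumental mag = {mag:.3f}")
```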

Relevance:

100.00%

Publisher:

Abstract:

Advances in biomedical signal acquisition systems for motion analysis have led to low-cost and ubiquitous wearable sensors which can be used to record movement data in different settings. This implies the potential availability of large amounts of quantitative data. It is then crucial to identify and extract the information of clinical relevance from the large amount of available data. This quantitative and objective information can be an important aid for clinical decision-making. Data mining is the process of discovering such information in databases through data processing, selection of informative data, and identification of relevant patterns. The databases considered in this thesis store motion data from wearable sensors (specifically accelerometers) and clinical information (clinical data, scores, tests). The main goal of this thesis is to develop data mining tools that can provide quantitative information to the clinician in the field of movement disorders. The thesis focuses on motor impairment in Parkinson's disease (PD). Different databases related to Parkinson's subjects at different stages of the disease were considered; each database is characterized by the data recorded during a specific motor task performed by different groups of subjects. The data mining techniques used in this thesis are feature selection (a technique used to find relevant information and to discard useless or redundant data), classification, clustering, and regression. The aims were to identify subjects at high risk for PD, characterize the differences between early PD subjects and healthy ones, characterize PD subtypes, and automatically assess the severity of symptoms in the home setting.
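
A minimal sketch of the feature-selection-plus-classification step described above, on synthetic stand-in data (the real accelerometer features and clinical labels are not reproduced here):

```python
# Feature selection followed by classification, evaluated by cross-validation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for features extracted from accelerometer recordings
# (e.g. gait variability, tremor band power), labeled PD vs. healthy.
X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # keep the 10 most discriminative features
    SVC(kernel="rbf"),
)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```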

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes in order to prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments, and the best trade-off between reconstruction quality and power consumption is investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
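
A minimal compressive sensing example in the spirit of the techniques above: a k-sparse signal is recovered from far fewer random measurements than its length, here using Orthogonal Matching Pursuit as the reconstruction solver. This is an illustrative sketch, not the thesis' node-side implementation:

```python
# Compressive sensing of a sparse signal with OMP reconstruction.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n, m, k = 256, 64, 8                 # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # k-sparse signal

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))                # sensing matrix
y = Phi @ x                                                # compressed samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_
print(f"relative error: {np.linalg.norm(x - x_hat) / np.linalg.norm(x):.3e}")
```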

Relevance:

100.00%

Publisher:

Abstract:

Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication are backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk-lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches and is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
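
A minimal sketch of the fingerprinting idea underlying these systems (fixed-size chunking for brevity, whereas production systems typically use content-defined chunking; this is not the thesis' design):

```python
# Fingerprinting-based deduplication: store each unique chunk once and keep a
# "file recipe", the ordered list of fingerprints needed to reassemble the data.
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    store: dict[str, bytes] = {}     # fingerprint -> unique chunk
    recipe: list[str] = []           # per-file chunk list (the "file recipe")
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:          # store the chunk only on first sight
            store[fp] = chunk
        recipe.append(fp)
    return store, recipe

data = b"A" * 16384 + b"B" * 8192 + b"A" * 16384     # redundant input
store, recipe = deduplicate(data)
stored = sum(len(c) for c in store.values())
print(f"logical size: {len(data)} B, stored: {stored} B, "
      f"dedup ratio: {len(data) / stored:.1f}x")
```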