924 results for Dairy cattle Breeding Australia Statistics Data processing
Abstract:
Switzerland was affected by the bluetongue virus serotype 8 (BTV-8) epidemic in Europe from 2007 to 2009. After three years of mandatory vaccination and comprehensive surveillance, Switzerland was shown to be free of BTV-8 in 2012. In the future, ELISA testing of bulk-tank milk (BTM) samples, a very sensitive and cost-effective method, should be used for the surveillance of all BTV serotypes. To determine the prevalence of seropositive herds, BTM from 240 cattle herds was sampled in July 2012. The results showed an apparent seroprevalence of 98.7% in the investigated dairy herds. Most plausibly, the high prevalence was caused by the vaccination campaigns rather than by infections with BTV-8: the cumulative number of BTV-8 cases in Switzerland during the outbreak was only 75. It is therefore very likely that the inactivated vaccines used induced long-term antibody titres. Because of the high seroprevalence, testing for BT antibodies cannot currently be used for early recognition of a new introduction of BTV. Nonetheless, testing of BTM samples is appropriate for an annual evaluation of the seroprevalence and, especially, as an instrument for early recognition of incursions as soon as the antibody prevalence declines. To monitor this decline, the BTM testing scheme should be conducted each year as described in this work.
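As a hedged illustration only (not taken from the study): the apparent herd-level seroprevalence quoted above can be recomputed, together with a simple Wilson confidence interval, from the sample counts. The figure of 237 positive herds out of 240 is an assumption chosen to reproduce the reported 98.7%.

```python
# Minimal sketch: apparent seroprevalence and Wilson 95% CI from herd counts.
# The count of 237 positive herds is assumed, chosen to match ~98.7% of 240.
from math import sqrt

def wilson_ci(positives: int, n: int, z: float = 1.96):
    """Return apparent prevalence and a Wilson score confidence interval."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

prev, lo, hi = wilson_ci(positives=237, n=240)
print(f"apparent seroprevalence: {prev:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```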
Abstract:
Background: The therapy of retained fetal membranes (RFM) is a controversial subject. In Switzerland, intrauterine antibiotics are routinely administered, although their effect on fertility parameters is questionable. The objective of this study was to compare the postpartum period after routine treatment of RFM in two groups: one group additionally received a placebo (A), whereas the other group additionally received a phytotherapeutic substance (lime bark) (B). The routine treatment of RFM included an attempt to manually remove the fetal membranes (for a maximum of 5 min), intramuscular administration of oxytetracycline and intrauterine treatment with tetracycline. In case of an elevated rectal temperature (>39.0°C), an additional non-steroidal anti-inflammatory drug was allowed. Methods: Cows undergoing caesarean section or suffering from prolapse of the uterus, deep cervical or vaginal injuries, hypocalcaemia, or illness during the last 14 days before calving were excluded. Cows had to be more than 265 days pregnant. Only cows that were artificially inseminated after RFM were included. Group stratification was done according to the last digit of the ear tag (even/uneven), with 50 cows in group A and 55 cows in group B. Results: The number of treatments after the initial treatment of RFM was not significantly different between groups. The median interval from calving to first insemination was 77 days in group A compared to 82 days in group B (p = 0.72). The number of AIs until conception was not significantly different between groups. The median number of days open was 89 in group A compared to 96 in group B (p = 0.57). The culling rate was not significantly different between groups. Conclusion: There was no difference between the groups either in the therapies within the first 50 days after RFM or in the subsequent fertility variables.
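As an illustrative sketch only: the p-values quoted for the medians above (e.g. days open, 89 vs. 96 days) are the kind of result a nonparametric two-sample comparison such as the Mann-Whitney U test yields. The abstract does not state which test was used, and the data below are invented for demonstration.

```python
# Hypothetical comparison of "days open" between two treatment groups using
# a Mann-Whitney U test; the values are invented, not the study's data.
from scipy.stats import mannwhitneyu

days_open_A = [62, 75, 89, 90, 101, 115, 130]   # hypothetical group A (placebo)
days_open_B = [70, 81, 96, 99, 110, 118, 140]   # hypothetical group B (lime bark)

stat, p = mannwhitneyu(days_open_A, days_open_B, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.2f}")
```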
Abstract:
In an effort to research and document relevant sloshing-type phenomena, a series of experiments has been conducted. The aim of this paper is to describe the setup and data processing of these experiments. A sloshing tank is subjected to angular motion. As a result, pressure registers are obtained at several locations, together with the motion data, torque, and a collection of image and video information. The experimental rig and the data acquisition systems are described. Useful information for experimental sloshing research practitioners is provided, relating to the liquids used in the experiments, the dyeing techniques, tank building processes, synchronization of acquisition systems, etc. A new procedure for reconstructing experimental data, which takes experimental uncertainties into account, is presented. This procedure is based on a least-squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
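A minimal sketch, assuming scipy is available, of a least-squares spline approximation of the kind the paper describes for reconstructing experimental records. The synthetic pressure-like signal, knot placement and spline order below are illustrative choices, not those of the authors.

```python
# Least-squares cubic spline fit to a noisy, synthetic pressure-like record.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0.0, 2.0, 400)                       # time axis [s]
true_p = np.exp(-3 * t) * np.sin(25 * t)             # synthetic "pressure" signal
noisy_p = true_p + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# Interior knots placed uniformly; in practice they would be adapted to the data.
knots = np.linspace(t[0], t[-1], 30)[1:-1]
spline = LSQUnivariateSpline(t, noisy_p, knots, k=3)  # cubic least-squares spline

reconstructed = spline(t)
rms_error = np.sqrt(np.mean((reconstructed - true_p) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```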
Abstract:
Thanks to advances in information technology in general, and in databases in particular, data storage devices are becoming cheaper and data processing speed is increasing. As a result, organizations tend to store large volumes of data holding great potential information. Decision Support Systems (DSS) try to use the stored data to obtain valuable information for organizations. In this paper, we use both data models and use cases to represent the functionality of data processing in DSS, following Software Engineering processes. We propose a methodology for developing DSS in the Analysis phase, with respect to the modeling of data processing. As a starting point, we use a data model adapted to the semantics involved in multidimensional databases, or data warehouses (DW). We also adopt an algorithm that provides all the possible ways to automatically cross-check multidimensional model data. Using the above, we propose diagrams and use case descriptions that can be considered patterns representing DSS functionality with regard to the processing of the DW data on which DSS are based. We highlight the reusability and automation benefits that can be achieved in this way, and we believe this study can serve as a guide in the development of DSS.
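A hedged illustration (not the authors' algorithm): one simple way to enumerate every combination of dimensions over which a data-warehouse fact table could be aggregated and cross-checked, which is the kind of exhaustive crossing of multidimensional model data the abstract refers to. The dimension names are invented.

```python
# Enumerate every non-empty subset of dimensions, i.e. every possible roll-up
# over a hypothetical fact table with the dimensions listed below.
from itertools import combinations

dimensions = ["time", "product", "store", "customer"]   # hypothetical dimensions

def all_groupings(dims):
    """Yield every non-empty subset of dimensions (each a candidate aggregation)."""
    for r in range(1, len(dims) + 1):
        for combo in combinations(dims, r):
            yield combo

for grouping in all_groupings(dimensions):
    print("GROUP BY", ", ".join(grouping))
```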
Abstract:
The PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ system is an integrated, embedded, ultrasonic guided-wave based system consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing needed to obtain SHM maps. PAMELA devices are built around a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, in addition to performing tests and transmitting the collected data to the controller, PAMELA devices can perform local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests, and communicate with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows developers to download their own algorithm code and add new data processing algorithms to the device. The SMA is developed in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed to cover the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
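A minimal sketch, in Python rather than the embedded environment of the PAMELA devices, of the delay-and-sum idea mentioned at the end of the abstract: each actuator-sensor residual signal is sampled at the time of flight actuator -> pixel -> sensor, and the contributions are summed into a damage index per pixel. The geometry, wave speed, sampling rate and signal below are invented.

```python
# Delay-and-sum damage index mapping over a pixel grid (synthetic example).
import numpy as np

FS = 1.0e6   # sampling frequency [Hz] (assumed)
C = 5000.0   # guided-wave group velocity [m/s] (assumed)

def delay_and_sum(signals, positions, grid):
    """Sum each residual signal at the actuator->pixel->sensor time of flight.

    signals:   dict mapping a (tx, rx) pair to a 1-D residual signal
    positions: dict mapping the same pair to (tx_xy, rx_xy) coordinates [m]
    grid:      (N, 2) array of pixel coordinates [m]
    Returns a damage index per pixel.
    """
    image = np.zeros(len(grid))
    for pair, sig in signals.items():
        tx_xy, rx_xy = positions[pair]
        tof = (np.linalg.norm(grid - tx_xy, axis=1) +
               np.linalg.norm(grid - rx_xy, axis=1)) / C
        idx = np.clip(np.round(tof * FS).astype(int), 0, sig.size - 1)
        image += np.abs(sig[idx])
    return image

# One actuator-sensor pair and a single echo at 100 us, i.e. a scatterer with
# roughly 0.5 m of total travel path at the assumed wave speed.
tx_xy, rx_xy = np.array([0.0, 0.0]), np.array([0.4, 0.0])
sig = np.zeros(2000)
sig[100] = 1.0
signals = {("T1", "R1"): sig}
positions = {("T1", "R1"): (tx_xy, rx_xy)}

xs, ys = np.meshgrid(np.linspace(0.0, 0.4, 41), np.linspace(0.0, 0.3, 31))
grid = np.column_stack([xs.ravel(), ys.ravel()])
image = delay_and_sum(signals, positions, grid)
print("maximum damage index at pixel", grid[image.argmax()])
```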
Abstract:
Nowadays, devices that monitor the health of structures consume a lot of power and need a lot of time to acquire, process, and send information about the structure to the main processing unit. To reduce this time, fast electronic devices are beginning to be used to accelerate the processing. In this paper, several hardware algorithms implemented in a programmable logic device are described. The goal of this implementation is to accelerate the processing and to reduce the amount of information that has to be sent. By reaching this goal, the time the processor needs to treat all the information is reduced, and so is the power consumption.
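A hypothetical illustration (not the paper's hardware design) of the kind of on-device data reduction described: instead of transmitting a full waveform, only the samples exceeding a threshold are kept, together with their indices, so far less information has to be sent to the main processing unit. The signal and threshold below are invented.

```python
# Threshold-based waveform reduction: keep only (index, value) pairs of interest.
import numpy as np

def reduce_waveform(signal: np.ndarray, threshold: float):
    """Return the indices and values of samples whose magnitude exceeds threshold."""
    idx = np.flatnonzero(np.abs(signal) > threshold)
    return idx, signal[idx]

rng = np.random.default_rng(1)
waveform = 0.01 * rng.standard_normal(10_000)
waveform[4000:4020] += 0.5                      # a short burst of interest
indices, values = reduce_waveform(waveform, threshold=0.1)
print(f"kept {indices.size} of {waveform.size} samples "
      f"({100 * indices.size / waveform.size:.2f}% of the data)")
```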
Abstract:
We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized “eigengenes” × “eigenarrays” space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively.
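A minimal numpy sketch of the decomposition described above: an expression matrix (genes × arrays) is factored by SVD into eigenarrays, singular values and eigengenes, and eigengenes judged to represent noise can be filtered out before the data are compared or sorted. The matrix is random and the number of eigengenes kept is arbitrary; both serve only to illustrate the mechanics.

```python
# SVD of a genes x arrays expression matrix into eigenarrays and eigengenes.
import numpy as np

rng = np.random.default_rng(0)
expression = rng.standard_normal((500, 12))        # 500 genes x 12 arrays (random)

# Columns of U are eigenarrays; rows of Vt are eigengenes.
U, s, Vt = np.linalg.svd(expression, full_matrices=False)

# Fraction of overall expression captured by each eigengene.
fractions = s**2 / np.sum(s**2)

# "Normalize" by keeping only the eigengenes assumed to represent signal
# (here the top 3, an arbitrary choice) and reconstructing the data.
k = 3
normalized = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("fractions of expression per eigengene:", np.round(fractions, 3))
print("normalized data shape:", normalized.shape)
```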
Abstract:
This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today’s online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such.