902 results for Data analysis system
Abstract:
The relationship between lameness and feeding behaviour in dairy cows is not yet fully understood. This study examined the effect of lameness on feeding behaviour at two points during lactation. Forty-five Holstein–Friesian dairy cows (average parity 3.3) were housed in cubicle accommodation after calving and fed a total mixed ration (TMR). At approximately 60 and 120 days post partum, 48 h of feeding behaviour data (including number of meals eaten, meal duration, meal size and feeding rate) were collected for each animal using feed boxes fitted to a data recording system. At the same time points, locomotion scores were recorded for each cow as a measure of lameness (1.0 = sound to 4.5 = severely lame). Relationships between feeding behaviour and locomotion score were analysed using Residual Maximum Likelihood (REML) analysis. At both time points, cows with higher locomotion scores ate fewer (P < 0.001), larger meals (P < 0.001) and had a shorter total feeding time (P < 0.001). At day 60 post partum, an increase in locomotion score was associated with a decrease in dry matter intake (DMI; P < 0.05), but at day 120 post partum no relationship was found between locomotion score and DMI. No relationship was found at either time point between locomotion score and mean meal duration or rate of feeding. The results of this study suggest that the effect of lameness on feeding behaviour in dairy cows does not remain constant across lactation.
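As an illustration only, a REML-fitted mixed model of the kind described above could be set up along the following lines with statsmodels, which fits linear mixed models by REML by default. The column names and the simulated data are hypothetical stand-ins, since the study data are not available here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the study data: one row per cow per time point,
# with hypothetical column names (the real data set is not available here).
rng = np.random.default_rng(0)
n_cows = 45
df = pd.DataFrame({
    "cow": np.repeat(np.arange(n_cows), 2),
    "day": np.tile([60, 120], n_cows),
    "locomotion_score": rng.uniform(1.0, 4.5, 2 * n_cows),
})
df["meal_size"] = 2.0 + 0.4 * df["locomotion_score"] + rng.normal(0, 0.3, len(df))

# Mixed model with cow as a random effect, fitted by REML (statsmodels' default).
model = smf.mixedlm("meal_size ~ locomotion_score * C(day)", data=df, groups="cow")
result = model.fit(reml=True)
print(result.summary())
```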
Abstract:
Parkinson's disease (PD) is a chronic, progressive, degenerative disorder of the nervous system that causes substantial morbidity and can shorten life. People with PD and their families can find the disease devastating. Nevertheless, this population of patients is not usually considered a group to be supported by palliative care specialists. Yet the nature of the illness and the challenges of managing its many physical and psychological effects raise questions about the potential benefits of a palliative care approach. The purpose of this project was to describe the experience of PD and consider the relevance of palliative care for this population. Semi-structured interviews were conducted with eight people with PD, 21 family caregivers and six health professionals. Five themes were developed from the data analysis: (1) emotional impact of diagnosis; (2) staying connected; (3) enduring financial hardship; (4) managing physical challenges; and (5) finding help for advanced stages. These data revealed that people with PD and family caregivers are confronted with similar issues to people with typical palliative care diagnoses, such as advanced cancer, and that a palliative approach may be helpful in the care of people with PD and their families.
Abstract:
The commonly used British Standard constant head triaxial permeability test for fine-grained soils is relatively time consuming. A reduction in the time required for soil permeability testing would provide potential cost savings to the construction industry, particularly in the construction quality assurance of landfill clay liners. The purpose of this paper is to evaluate an alternative approach to measuring the permeability of fine-grained soils that benefits from the accelerated time scaling of seepage flow when specimens are tested under the elevated gravity conditions provided by a centrifuge. As part of the investigation, an apparatus was designed and built to measure water flow through soil samples under elevated gravitational acceleration using a small desktop laboratory centrifuge. A membrane was used to hydrostatically confine the test sample. A miniature data acquisition system was designed and incorporated in the apparatus to monitor and record changes in head and flow throughout the tests. Under enhanced gravity in the centrifuge, the flow through the sample was under 'variable head' conditions, as opposed to the 'constant head' conditions of the classic constant head permeability test conducted at 1 g. A mathematical model was developed for analysing Darcy's coefficient of permeability under elevated gravitational acceleration and was verified using the results obtained. The test data compare well with results on analogous samples obtained using the classical British Standard constant head permeability test.
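For orientation, 'variable head' flow at 1 g is normally interpreted with the classical falling-head relation; a plausible adaptation to a centrifuge spinning at N times Earth's gravity (an assumption about how such a model might scale, not the paper's exact formulation) is:

k = \frac{a\,L}{A\,N\,t}\,\ln\!\left(\frac{h_1}{h_2}\right)

where a is the cross-sectional area of the head-measuring column, A and L are the cross-sectional area and length of the sample, t is the elapsed time, h_1 and h_2 are the initial and final heads, and N is the gravity scaling factor; at N = 1 this reduces to the standard 1 g falling-head expression.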
Abstract:
AIMS: To assess quantitatively variations in the extent of capillary basement membrane (BM) thickening between different retinal layers and within arterial and venous environments during diabetes. METHODS: One year after induction of experimental (streptozotocin) diabetes in rats, six diabetic animals together with six age-matched control animals were sacrificed and the retinas fixed for transmission electron microscopy (TEM). Blocks of retina straddling the major arteries and veins in the central retina were dissected out, embedded in resin, and sectioned. Capillaries in close proximity to arteries or veins were designated as residing in either an arterial (AE) or a venous (VE) environment respectively, and the retinal layer in which each capillary was located was also noted. The thickness of the BM was then measured on an image-analyser-based two-dimensional morphometric analysis system. RESULTS: In both diabetics and controls the AE capillaries had consistently thicker BMs than the VE capillaries. The BMs of both AE and VE capillaries from diabetics were thicker than those of capillaries in the corresponding retinal layer from the normal rats (p ≤ 0.005). Also, in normal AE and VE capillaries and diabetic AE capillaries the BM in the nerve fibre layer (NFL) was thicker than that in either the inner (IPL) or outer (OPL) plexiform layers (p ≤ 0.001). However, in diabetic VE capillaries the BMs of capillaries in the NFL were thicker than those of capillaries in the IPL (p ≤ 0.05) which, in turn, had thicker BMs than capillaries in the OPL (p ≤ 0.005). CONCLUSIONS: The variation in the extent of capillary BM thickening between different retinal layers within AE and VE environments may be related to differences in levels of oxygen tension and oxidative stress in the retina around arteries compared with that around veins.
Abstract:
Corneal endothelial cells from normal and traumatized human, primate, cat and rabbit eyes were studied by specular microscopy. Morphometric analysis was performed on micrographs of corneal endothelium using a semi-automated image analysis system. The results showed that under normal conditions the corneal endothelium of all four species exhibits major morphological similarities (mean cell areas: human 317 ± 32 µm², primate 246 ± 22 µm², cat 357 ± 25 µm², rabbit 308 ± 35 µm²). The normal corneal endothelium in man was found to be more polymegethous than that of the other species. Trauma to cat, primate and human corneas resulted in a long-term reduction in endothelial cell density and enhanced polymegethism. In contrast, the reparative response of the rabbit ensured the reformation of an essentially normal monolayer following injury. Endothelial giant cells were a normal inclusion in the rabbit corneal endothelium but only became a significant feature in cat, primate and man following trauma. The presence of corneal endothelial giant cells in amitotic corneas may therefore represent a compensatory response in the absence of mitotic potential.
Abstract:
In the UK, end-of-life care strategies recommend that patients and families be involved in decision-making around treatment and care. In Bolivia, such strategies do not exist, and access to oncology services depends on finance, geography, education and culture. Compared with more developed countries, the delivery of oncology services in Latin America may result in a higher percentage of patients presenting with advanced, incurable disease. The objective of this study was to explore the decision-making experiences of health and social care professionals who cared for oncology and palliative care patients attending the Instituto Oncológico Nacional, Cochabamba (Bolivia). Patients were predominantly from the Quechua tradition, which has its own ethnic diversity, linguistic distinctions and economic systems. Qualitative data were collected during focus groups. Data analysis was conducted using Interpretative Phenomenological Analysis. Three interrelated themes emerged: (i) making sense of structures of experience and relationality; (ii) frustration with the system; and (iii) the challenges of promoting shared decision making. The study uncovered participants' lived experiences, emotions and perceptions of providing care for Quechua patients. There was evidence of structural inequalities, the marginalisation of Quechua patients and areas of concern that social workers might well be equipped to respond to, such as accessing finances for treatment/care, education and alleviating psychological or spiritual suffering.
Abstract:
The temperature at which densification ends was investigated for a range of blends comprising a metallocene-catalysed medium-density polyethylene (PE) in two different physical forms (powder and micropellets), using a novel data acquisition system (TP Picture®) developed by Total Petrochemicals [1]. The various blends were subsequently rotomoulded and test specimens were prepared for mechanical analysis to establish the relationship between densification rate, bubble size and distribution, and part properties. The micropellets exhibited shorter bubble removal times than the powder.
Abstract:
Recent technological advances have increased the quantity of movement data being recorded. While valuable knowledge can be gained by analysing such data, its sheer volume creates challenges. Geovisual analytics, which supports human cognition by providing tools to reason about data, offers powerful techniques to address these challenges. This paper introduces such a geovisual analytics environment for exploring movement trajectories, providing visualisation interfaces based on the classic space-time cube. Additionally, a new approach, using the mathematical description of motion within a space-time cube, is used to determine the similarity of trajectories and forms the basis for clustering them. These techniques were used to analyse pedestrian movement. The results reveal interesting and useful spatiotemporal patterns and clusters of pedestrians exhibiting similar behaviour.
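A minimal sketch of the clustering idea, assuming trajectories are given as arrays of (x, y, t) fixes: resample each trajectory over normalised time, take the mean point-wise distance in the space-time cube as a simplified similarity measure, and cluster hierarchically. This stands in for, but is not, the paper's motion-based measure; all names are illustrative.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def resample(traj, n=50):
    """Resample an (x, y, t) trajectory to n points over normalised time."""
    t = traj[:, 2]
    s = (t - t[0]) / (t[-1] - t[0])
    target = np.linspace(0.0, 1.0, n)
    return np.column_stack([interp1d(s, traj[:, 0])(target),
                            interp1d(s, traj[:, 1])(target),
                            target])

def trajectory_distances(trajs, n=50):
    """Pairwise mean point-wise distance between resampled trajectories."""
    R = [resample(t, n) for t in trajs]
    m = len(R)
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            D[i, j] = D[j, i] = np.linalg.norm(R[i] - R[j], axis=1).mean()
    return D

# Example: cluster pedestrian trajectories into four groups.
# trajs = [...]                                # list of (k_i, 3) arrays of fixes
# labels = fcluster(linkage(squareform(trajectory_distances(trajs)),
#                           method="average"), t=4, criterion="maxclust")
```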
Abstract:
Today there is a growing interest in the integration of health monitoring applications in portable devices, necessitating the development of methods that improve the energy efficiency of such systems. In this paper, we present a systematic approach that enables energy-quality trade-offs in spectral analysis systems for bio-signals, which are useful in monitoring various health conditions such as those associated with heart rate. To enable such trade-offs, the processed signals are expressed initially in a basis in which significant components that carry most of the relevant information can be easily distinguished from the parts that influence the output to a lesser extent. Such a classification allows the pruning of operations associated with the less significant signal components, leading to power savings with minor quality loss since only less useful parts are pruned under the given requirements. To exploit the attributes of the modified spectral analysis system, thresholding rules are determined and adopted at design- and run-time, allowing the static or dynamic pruning of less useful operations based on the accuracy and energy requirements. The proposed algorithm is implemented on a typical sensor node simulator, and results show up to 82% energy savings when static pruning is combined with voltage and frequency scaling, compared to the conventional algorithm in which such trade-offs were not available. In addition, experiments with numerous cardiac samples from various patients show that these energy savings come with a 4.9% average accuracy loss, which does not affect the system's ability to detect sinus arrhythmia, which was used as a test case.
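To make the pruning idea concrete, here is a minimal sketch (not the paper's algorithm, which works in a particular basis with design- and run-time thresholding rules): express the signal in the frequency domain, keep only the largest-magnitude coefficients, and measure the quality loss of the pruned representation. The signal, keep fraction and names are illustrative.

```python
import numpy as np

def pruned_spectrum(signal, keep_fraction=0.2):
    """Keep only the largest-magnitude spectral coefficients, zeroing the rest.

    Zeroed coefficients stand in for the operations an energy-aware
    implementation would skip.
    """
    X = np.fft.rfft(signal)
    mags = np.abs(X)
    cutoff = np.sort(mags)[int((1.0 - keep_fraction) * len(mags))]
    return np.where(mags >= cutoff, X, 0.0)

# Quality loss of the pruned representation (relative reconstruction error)
# on a synthetic stand-in for a bio-signal sampled at 250 Hz.
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 1.2 * np.arange(0, 10, 1 / 250)) + 0.05 * rng.normal(size=2500)
x_hat = np.fft.irfft(pruned_spectrum(x), n=len(x))
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```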
Abstract:
Research over the past two decades on the Holocene sediments from the tide-dominated west side of the lower Ganges delta has focussed on constraining the sedimentary environment through grain size distributions (GSD). GSD has traditionally been assessed through probability density function (PDF) methods (e.g. log-normal, log skew-Laplace functions), but these approaches do not acknowledge the compositional nature of the data, which may compromise lithofacies interpretations. The use of PDF approaches in GSD analysis poses a series of challenges for the development of lithofacies models, such as equifinal distribution coefficients and the obscuring of empirical data variability. In this study a methodological framework for characterising GSD is presented, based on compositional data analysis (CODA) within a multivariate statistical framework. This provides a statistically robust analysis of the fine tidal estuary sediments of the West Bengal Sundarbans, relative to alternative PDF approaches.
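A minimal sketch of the CODA step, assuming each sample is a vector of grain-size fractions summing to one: apply the centred log-ratio (clr) transform so that standard multivariate methods can be used without the closure problem. The zero-replacement value and the downstream PCA step are illustrative assumptions, not the study's exact workflow.

```python
import numpy as np

def clr(compositions, eps=1e-6):
    """Centred log-ratio transform of compositional grain-size data.

    Each row is a grain-size distribution (fractions summing to 1);
    zeros are replaced by a small value before taking logs.
    """
    X = np.asarray(compositions, dtype=float)
    X = np.where(X <= 0, eps, X)
    X = X / X.sum(axis=1, keepdims=True)      # re-close after zero replacement
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Once transformed, ordinary multivariate tools apply, e.g. PCA:
# from sklearn.decomposition import PCA
# scores = PCA(n_components=2).fit_transform(clr(gsd_matrix))
```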
Abstract:
Cloud data centres are critical business infrastructures and among the fastest growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distribution, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure set-up. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever the correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that the hosting node's I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
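The correlation test at the heart of this approach can be sketched roughly as follows (a simplified illustration, not the LADT implementation); the window length, threshold and series names are hypothetical.

```python
import pandas as pd

def detect_anomalies(host_iops, vm_iops, window=60, threshold=0.7):
    """Flag windows where host IOPS and aggregated VM IOPS decorrelate.

    host_iops : pd.Series of node-level IOPS samples
    vm_iops   : pd.DataFrame, one column of IOPS per VM on that node
    Returns a boolean Series, True where the rolling Pearson correlation
    falls below the threshold (a suspected node-level anomaly).
    """
    aggregated = vm_iops.sum(axis=1)                      # aggregate VM metric
    corr = host_iops.rolling(window).corr(aggregated)     # rolling correlation
    return corr < threshold

# Example: a disk-stressing process on the host shows up as a run of True values.
# anomalous = detect_anomalies(host_series, vm_frame, window=120, threshold=0.5)
```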
Abstract:
Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January - April and August - December inclusive, and typically has allocations of 10 nights per month.
Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1).
Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345-9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm and resolutions 23-33 Å) and imaging with broadband JHKs filters.
Results. This first data release (SSDR1) contains flux calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and we released calibrated optical spectra and classifications publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 replace those previously released spectra. They have more reliable and quantifiable flux calibrations, correction for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the flux calibrations for EFOSC2 across the whole survey in SSDR1 to be typically ∼15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, can allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high accuracy absolute spectrophotometry, but synthetic photometry with the accompanying JHKs imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this.
Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification, and access to reliable pipeline-processed data products, has enabled early science papers within the first few months of the survey.
Abstract:
This paper outlines a forensic method for analysing the energy, environmental and comfort performance of a building. The method has been applied to a recently developed event space in an Irish public building, which was evaluated using on-site field studies, data analysis, building simulation and occupant surveying. The method allows for consideration of both the technological and anthropological aspects of the building in use and for the identification of unsustainable operational practice and emerging problems. The forensic analysis identified energy savings of up to 50%, enabling a more sustainable, lower-energy operational future for the building. The building forensic analysis method presented in this paper is now planned for use in other public and commercial buildings.
Abstract:
Perfect information is seldom available to man or machines due to uncertainties inherent in real world problems. Uncertainties in geographic information systems (GIS) stem from either vague/ambiguous or imprecise/inaccurate/incomplete information, and it is necessary for GIS to develop tools and techniques to manage these uncertainties. There is widespread agreement in the GIS community that although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by a lack of consistency and uniformity. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.