935 results for data gathering
Abstract:
Economic surveys of fisheries are undertaken in several countries as a means of assessing the economic performance of their fisheries. The level of economic profits accruing in the fishery can be estimated from the average economic profits of the boats surveyed. Economic profits consist of two components—resource rent and intra-marginal rent. From a fisheries management perspective, the key indicator of performance is the level of resource rent being generated in the fishery. Consequently, these different components need to be separated out. In this paper, a means of separating out the rent components is identified for a heterogeneous fishery. This is applied to the multi-purpose fleet operating in the English Channel. The paper demonstrates that failing to separate out these two components may result in a misrepresentation of the economic performance of the fishery.
Abstract:
This paper proposes a simulation-based density estimation technique for time series that exploits information found in covariate data. The method can be paired with a large range of parametric models used in time series estimation. We derive asymptotic properties of the estimator and illustrate attractive finite sample properties for a range of well-known econometric and financial applications.
Abstract:
Assessment for Learning practices with students, such as feedback and self- and peer assessment, are opportunities for teachers and students to develop a shared understanding of how to create quality learning performances. Quality is often represented through achievement standards. This paper explores how primary school teachers in Australia used the process of annotating work samples to develop shared understanding of achievement standards during their curriculum planning phase, and how this understanding informed their teaching so that their students also developed this understanding. Bernstein's concept of the pedagogic device is used to identify the ways teachers recontextualised their assessment knowledge into their pedagogic practices. Two researchers worked alongside seven primary school teachers in two schools over a year, gathering qualitative data through focus groups and interviews. Three general recontextualising approaches were identified in the case studies: recontextualising standards by reinterpreting the role of rubrics, recontextualising by replicating the annotation process with the students, and recontextualising by reinterpreting practices with students. While each approach had strengths and limitations, all of the teachers concluded that annotating conversations in the planning phase enhanced their understanding and informed their practices in helping students to understand expectations for quality.
Abstract:
Although the collection of player and ball tracking data is fast becoming the norm in professional sports, large-scale mining of such spatiotemporal data has yet to surface. In this paper, given an entire season's worth of player and ball tracking data from a professional soccer league (approximately 400,000,000 data points), we present a method which can conduct both individual player and team analysis. Due to the dynamic, continuous and multi-player nature of team sports like soccer, a major issue is aligning player positions over time. We present a "role-based" representation that dynamically updates each player's relative role at each frame and demonstrate how this captures the short-term context to enable both individual player and team analysis. We discover roles directly from data by utilizing a minimum entropy data partitioning method and show how this can be used to accurately detect and visualize formations, as well as analyze individual player behavior.
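The abstract above does not give the per-frame alignment procedure in detail; as a minimal sketch of the core idea, the following assigns players to role slots by minimizing a total assignment cost at one frame (the paper's actual method minimizes entropy over role distributions; the function name, the distance-based cost, and the brute-force search are illustrative assumptions suited only to small player counts).

```python
from itertools import permutations
from math import hypot

def assign_roles(players, role_anchors):
    """Brute-force player-to-role assignment for one frame, minimising
    total distance to each role's anchor position (illustrative; a real
    system would use the Hungarian algorithm and an entropy objective)."""
    best, best_cost = None, float("inf")
    n = len(players)
    for perm in permutations(range(n)):
        cost = sum(hypot(players[i][0] - role_anchors[perm[i]][0],
                         players[i][1] - role_anchors[perm[i]][1])
                   for i in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] = role index assigned to player i

# toy frame: three players, three role anchors
players = [(0.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
roles   = [(0.0, 4.9), (0.1, 0.0), (5.0, 5.1)]
print(assign_roles(players, roles))  # (1, 2, 0)
```

Re-running this assignment at every frame is what lets the representation track role swaps (e.g. wingers switching flanks) that a fixed player ordering would miss.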
Abstract:
Experts with a trained eye can often identify a team by its unique style of play, evident in its movement, passing and interactions. In this paper, we present a method which can accurately determine the identity of a team from spatiotemporal player tracking data. We do this by utilizing a formation descriptor which is found by minimizing the entropy of role-specific occupancy maps. We show that our approach is significantly better at identifying different teams than standard measures (i.e., shots, passes etc.). We demonstrate the utility of our approach using an entire season of Prozone player tracking data from a top-tier professional soccer league.
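The formation descriptor above rests on the entropy of role-specific occupancy maps: a role that occupies a compact, distinctive region of the pitch has low entropy. A minimal sketch of just the entropy computation over a gridded occupancy map (the gridding and role-assignment steps are omitted, and the function name is an illustrative assumption):

```python
from math import log2

def occupancy_entropy(counts):
    """Shannon entropy (bits) of an occupancy map, given raw visit
    counts per grid cell; lower values mean a more compact role."""
    total = sum(counts)
    return sum(-(c / total) * log2(c / total) for c in counts if c > 0)

# a compact role concentrates occupancy in one cell -> zero entropy;
# a role spread evenly over four cells has the maximum, 2 bits
compact = [8, 0, 0, 0]
spread  = [2, 2, 2, 2]
print(occupancy_entropy(compact))  # 0.0
print(occupancy_entropy(spread))   # 2.0
```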
Abstract:
The majority of stem cell therapies for corneal repair are based upon the use of progenitor cells isolated from corneal tissue, but a growing body of literature suggests a role for mesenchymal stromal cells (MSC) isolated from non-corneal tissues. While the mechanism of MSC action seems likely to involve their immuno-modulatory properties, claims have emerged of MSC transdifferentiation into corneal cells. Substantial differences in methodology and experimental outcomes, however, have prompted us to perform a systematic review of the published data. Key questions used in our analysis included: the choice of markers used to assess corneal cell phenotype, the techniques employed to detect these markers, adequate reporting of controls, and tracking of MSC when studied in vivo. Our search of the literature revealed 28 papers published since 2006, with half appearing since 2012. MSC cultures established from bone marrow and adipose tissue have been best studied (22 papers). Critically, only 11 studies employed appropriate markers of corneal cell phenotype, along with necessary controls. Ten out of these 11 papers, however, contained positive evidence of corneal cell marker expression by MSC. The clearest evidence is observed with respect to expression of markers for corneal stromal cells by MSC. In comparison, the evidence for MSC conversion into either corneal epithelial cells or corneal endothelial cells is often inconsistent or inconclusive. Our analysis clarifies this emerging body of literature and provides guidance for future studies of MSC differentiation within the cornea as well as other tissues.
Abstract:
Health Information Exchange (HIE) is a patient-centric approach to health and medical information management, enhanced by the integration of Information and Communication Technologies (ICT). While health information systems are taking on increasingly complex directives in the wake of the 'big data' paradigm, extracting quality information remains challenging. This talk will share ICT-enabled healthcare scenarios that use big data analytics. It will also discuss research and development in big data analytics, including current trends in applying these technologies to health care services and the critical research challenges in extracting quality information to improve quality of life.
Abstract:
Governments around the world want to know a lot about who we are and what we’re doing online and they want communications companies to help them find it. We don’t know a lot about when companies hand over this data, but we do know that it’s becoming increasingly common.
Abstract:
Double-pulse tests are commonly used as a method for assessing the switching performance of power semiconductor switches in a clamped inductive switching application. Data generated from these tests are typically in the form of sampled waveform data captured using an oscilloscope. In cases where it is of interest to explore a multi-dimensional parameter space and corresponding result space it is necessary to reduce the data into key performance metrics via feature extraction. This paper presents techniques for the extraction of switching performance metrics from sampled double-pulse waveform data. The reported techniques are applied to experimental data from characterisation of a cascode gate drive circuit applied to power MOSFETs.
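One metric commonly extracted from double-pulse waveform data is the switching energy, the integral of instantaneous power over a switching transition. As a minimal sketch of that feature-extraction step (the function name, the trapezoidal rule, and the toy waveform are illustrative assumptions, not the paper's exact technique):

```python
def switching_energy(t, v, i):
    """Trapezoidal integration of instantaneous power v(t)*i(t) over a
    switching transition captured as sampled oscilloscope waveforms."""
    p = [vk * ik for vk, ik in zip(v, i)]
    return sum((t[k + 1] - t[k]) * (p[k] + p[k + 1]) / 2
               for k in range(len(t) - 1))

# toy turn-off transition: drain voltage ramps up while current falls
t = [0.0, 1e-8, 2e-8]    # seconds
v = [0.0, 200.0, 400.0]  # volts
i = [10.0, 5.0, 0.0]     # amps
print(switching_energy(t, v, i))  # ~1e-5 joules
```

In practice the transition window itself must first be located in the captured record (e.g. by thresholding the gate or drain waveform) before integrating, which is part of what makes automated feature extraction non-trivial.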
Abstract:
Background: Australian national biomonitoring for persistent organic pollutants (POPs) relies upon age-specific pooled serum samples to characterize central tendencies of concentrations but does not provide estimates of upper-bound concentrations. This analysis compares population variation from biomonitoring datasets from the US, Canada, Germany, Spain, and Belgium to identify and test patterns potentially useful for estimating population upper-bound reference values for the Australian population.
Methods: Arithmetic means and the ratio of the 95th percentile to the arithmetic mean (P95:mean) were assessed by survey for defined age subgroups for three polychlorinated biphenyls (PCBs 138, 153, and 180), hexachlorobenzene (HCB), p,p′-dichlorodiphenyldichloroethylene (DDE), 2,2′,4,4′-tetrabrominated diphenyl ether (PBDE 47), perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS).
Results: Arithmetic mean concentrations of each analyte varied widely across surveys and age groups. However, P95:mean ratios differed to a limited extent, with no systematic variation across ages. The average P95:mean ratios were 2.2 for the three PCBs and HCB; 3.0 for DDE; and 2.0 and 2.3 for PFOA and PFOS, respectively. The P95:mean ratio for PBDE 47 was more variable among age groups, ranging from 2.7 to 4.8. The average P95:mean ratios accurately estimated age group-specific P95s in the Flemish Environmental Health Survey II and were used to estimate the P95s for the Australian population by age group from the pooled biomonitoring data.
Conclusions: Similar population variation patterns for POPs were observed across multiple surveys, even when absolute concentrations differed widely. These patterns can be used to estimate population upper bounds when only pooled sampling data are available.
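The estimation step described above reduces to multiplying a pooled-sample arithmetic mean by a survey-derived P95:mean ratio. A one-function sketch (the function name and the numeric values are illustrative, not the paper's data):

```python
def estimate_p95(pooled_mean, p95_to_mean_ratio):
    """Estimate a population 95th-percentile concentration from a
    pooled-sample arithmetic mean and a survey-derived P95:mean ratio."""
    return pooled_mean * p95_to_mean_ratio

# e.g. a pooled DDE mean of 10 units combined with the reported
# average DDE ratio of 3.0 gives an estimated P95 of 30 units
print(estimate_p95(10.0, 3.0))  # 30.0
```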
Abstract:
In the past few years, there has been a steady increase in the attention, importance and focus of green initiatives related to data centers. While various energy-aware measures have been developed for data centers, the need to simultaneously improve the efficiency of application assignment has yet to be met. For instance, many energy-aware measures applied to data centers maintain a trade-off between energy consumption and Quality of Service (QoS). To address this problem, this paper presents a novel concept of profiling to facilitate offline optimization for a deterministic assignment of applications to virtual machines. A profile-based model is then established for obtaining near-optimal allocations of applications to virtual machines with consideration of three major objectives: energy cost, CPU utilization efficiency and application completion time. From this model, a profile-based and scalable matching algorithm is developed to solve the profile-based model. The assignment efficiency of our algorithm is then compared with that of the Hungarian algorithm, which gives the optimal solution but does not scale well.
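To illustrate the scalability-versus-optimality trade-off the abstract describes (not the paper's actual algorithm), the following is a minimal sketch of a greedy matcher over a profiled application-to-VM cost matrix; the function name and toy costs are assumptions for illustration.

```python
def greedy_assign(cost):
    """Greedy application-to-VM matching on a profiled cost matrix
    cost[a][v]: repeatedly place the cheapest remaining (app, VM) pair.
    Scales well but, unlike the Hungarian algorithm, is not optimal."""
    assigned_vms, plan = set(), {}
    pairs = sorted((c, a, v) for a, row in enumerate(cost)
                             for v, c in enumerate(row))
    for c, a, v in pairs:
        if a not in plan and v not in assigned_vms:
            plan[a] = v
            assigned_vms.add(v)
    return plan

cost = [[4, 1, 3],   # app 0's profiled cost on VMs 0..2
        [2, 0, 5],   # app 1
        [3, 2, 2]]   # app 2
print(greedy_assign(cost))  # app->VM plan with total cost 6; the
                            # optimal (Hungarian) cost here is 5
```

The toy matrix shows why the comparison in the paper is interesting: greedy commits to the cheapest pair first and can be forced into a worse total than the optimal assignment.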
Abstract:
This research is a step forward in improving the accuracy of detecting anomalies in a data graph representing connectivity between people in an online social network. The proposed hybrid methods are based on fuzzy machine learning techniques utilising different types of structural input features. The methods are presented within a multi-layered framework which provides the full requirements needed for finding anomalies in data graphs generated from online social networks, including data modelling and analysis, labelling, and evaluation.
Abstract:
In this paper, we summarize our recent work in analyzing and predicting behaviors in sports using spatiotemporal data. We specifically focus on two recent works: 1) predicting shot location in tennis using Hawk-Eye tracking data, and 2) clustering spatiotemporal plays in soccer from a professional league to discover the methods by which teams get a shot on goal.
Abstract:
This paper presents a single-pass algorithm for mining discriminative itemsets in data streams using a novel data structure and the tilted-time window model. Discriminative itemsets are defined as itemsets that are frequent in one data stream and whose frequency in that stream is much higher than in the rest of the streams in the dataset. To control the size of the data structure, we propose a pruning process that results in a compact tree structure containing the discriminative itemsets. Empirical analysis shows the sound time and space complexity of the proposed method.
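As an illustration of the definition only (not the paper's single-pass tree algorithm), the following brute-force sketch enumerates itemsets in a target stream and keeps those whose support exceeds a minimum and is at least a given ratio above their support in every other stream; the function name and thresholds are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def discriminative_itemsets(target, others, min_support=0.1, min_ratio=2.0):
    """Itemsets frequent in the target stream whose support there is at
    least min_ratio times their support in every other stream.
    Brute-force enumeration; practical only for small transactions."""
    def supports(stream):
        counts = Counter()
        for txn in stream:
            for r in range(1, len(txn) + 1):
                for combo in combinations(sorted(txn), r):
                    counts[combo] += 1
        return {iset: c / len(stream) for iset, c in counts.items()}

    s_target = supports(target)
    s_others = [supports(o) for o in others]
    return {iset for iset, sup in s_target.items()
            if sup >= min_support
            and all(sup >= min_ratio * s.get(iset, 0.0) for s in s_others)}

target = [{"a", "b"}, {"a"}, {"a", "b"}]   # stream of interest
others = [[{"b"}, {"c"}]]                  # one background stream
print(sorted(discriminative_itemsets(target, others)))
# [('a',), ('a', 'b')]  -- 'b' alone is frequent in both streams
```

The single-pass tree structure in the paper exists precisely to avoid this exponential enumeration while tracking the same support ratios over tilted-time windows.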