943 results for Data utility


Relevance:

30.00%

Publisher:

Abstract:

An instrumented impact test set-up was used to evaluate the influence of water ingress on the impact response of a carbon–epoxy (C–E) laminated composite system containing discontinuous buffer strips (BS). Data on the BS-free C–E sample in dry conditions serve as a reference against data from samples immersed in water. The work demonstrates the utility of an instrumented impact test set-up in characterising the response, first to the architectural difference introduced by the buffer strips and then to the additional phase formed by water that has ingressed into the sample. The presence of water was found to enhance the energy absorption characteristics of the C–E system with BS insertions. With an increasing number of BS layer insertions, the load–time plots displayed characteristic changes. The ductility indices (DI) were lower for the water-immersed samples than for the dry ones.
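
As a concrete illustration of the kind of quantity reported above, the sketch below computes a ductility index from a load–time trace. It assumes the common convention DI = propagation energy / initiation energy, split at peak load, with energy recovered by impulse–momentum integration; the paper may define DI differently, and all inputs here are hypothetical.

```python
# Hypothetical sketch: ductility index (DI) from an instrumented impact trace.
# Assumes DI = propagation energy / initiation energy, split at peak load,
# which is a common convention but not necessarily the paper's definition.
import numpy as np

def ductility_index(t, force, m_impactor, v0):
    """t [s], force [N], impactor mass [kg], initial impact velocity [m/s]."""
    # Impactor velocity from impulse-momentum, then displacement and energy.
    impulse = np.concatenate(([0.0], np.cumsum(0.5 * (force[1:] + force[:-1]) * np.diff(t))))
    v = v0 - impulse / m_impactor
    x = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
    energy = np.concatenate(([0.0], np.cumsum(0.5 * (force[1:] + force[:-1]) * np.diff(x))))
    i_peak = int(np.argmax(force))
    e_init = energy[i_peak]               # energy absorbed up to peak load
    e_prop = energy[-1] - energy[i_peak]  # energy absorbed after peak load
    return e_prop / e_init
```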

Relevance:

30.00%

Publisher:

Abstract:

We consider a setting in which several operators offer downlink wireless data access services in a certain geographical region. Each operator deploys several base stations or access points and registers some subscribers. In such a situation, if operators pool their infrastructure and permit subscribers to be served by any of the cooperating operators, then there can be better overall user satisfaction and increased operator revenue. We use coalitional game theory to investigate such resource pooling and cooperation between operators. We use utility functions to model user satisfaction, and show that the resulting coalitional game has the property that if all operators cooperate (i.e., form a grand coalition), then there is an operating point that maximizes the sum utility over the operators while providing the operators revenues such that no subset of operators has an incentive to break away from the coalition. We investigate whether such operating points can result in utility unfairness between users of the various operators. We also study other revenue-sharing concepts, namely the nucleolus and the Shapley value. Such investigations throw light on criteria for operators to accept or reject subscribers, based on the service-level agreements they propose. We also investigate the situation in which only certain subsets of operators may be willing to cooperate.
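
For readers unfamiliar with the solution concepts mentioned, the toy sketch below computes the Shapley value of a hypothetical three-operator pooling game by brute force over player orderings. The characteristic function v is invented for illustration and is not the paper's utility model.

```python
# Illustrative sketch (not the paper's model): Shapley value of a toy
# 3-operator resource-pooling game. v maps each coalition to the total
# revenue/utility it can achieve; the numbers are made up.
from itertools import permutations

v = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 8, frozenset('C'): 6,
     frozenset('AB'): 24, frozenset('AC'): 20, frozenset('BC'): 17,
     frozenset('ABC'): 36}

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]  # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in players}

print(shapley('ABC', v))  # revenue shares under the Shapley value
```

The nucleolus mentioned in the abstract is an alternative division rule that lexicographically minimizes the largest coalition excess; computing it requires solving a sequence of linear programs and is omitted here.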

Relevance:

30.00%

Publisher:

Abstract:

The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
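
A minimal sketch of the per-user trade-off the abstract describes, under assumed functional forms (logarithmic utility, quadratic disutility of shortfall, and a quadratic congestion penalty scaling the price); the paper's actual functions and price conditions are not reproduced here.

```python
# Sketch, under assumed functional forms, of one user's rate choice in a
# "finite data" setting: concave utility U of the rate r, convex disutility D
# of the shortfall (contracted rate c minus r), and a price p scaled by a
# convex congestion penalty g of the total link load y.
import numpy as np
from scipy.optimize import minimize_scalar

def best_rate(c, p, y, cap):
    U = lambda r: np.log(1.0 + r)           # assumed concave utility
    D = lambda s: 0.5 * max(s, 0.0) ** 2    # assumed convex disutility of shortfall
    g = lambda load: 1.0 + (load / cap) ** 2  # assumed convex congestion penalty
    payoff = lambda r: -(U(r) - D(c - r) - p * g(y) * r)
    res = minimize_scalar(payoff, bounds=(0.0, c), method='bounded')
    return res.x

# A user with contracted rate 2.0 facing price 0.1 on a moderately loaded link:
print(best_rate(c=2.0, p=0.1, y=8.0, cap=10.0))
```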

Relevance:

30.00%

Publisher:

Abstract:

The Olympic Coast National Marine Sanctuary (OCNMS) continues to invest significant resources into seafloor mapping activities along Washington's outer coast (Intelmann and Cochrane 2006; Intelmann et al. 2006; Intelmann 2006). Results from these annual mapping efforts offer a snapshot of current ground conditions, help to guide research and management activities, and provide a baseline for assessing the impacts of various threats to important habitat. During August 2004 and May and July 2005, we used side scan sonar to image several regions of the sea floor in the northern OCNMS, and the data were mosaicked at 1-meter pixel resolution. Video from a towed camera sled, bathymetry data, sediment samples, and side scan sonar mapping were integrated to describe geological and biological aspects of habitat. Polygon features were created and attributed with a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999). For three small areas that were mapped with both side scan sonar and multibeam echosounder, we compared output from the classified images and found little difference in results between the two methods. With these considerations, backscatter derived from multibeam bathymetry is currently a cost-efficient and safe method for seabed imaging in the shallow (<30 meters) rocky waters of OCNMS: the image quality is sufficient for classification purposes, the associated depths provide further descriptive value, and risks to gear are minimized. In shallow waters (<30 meters) without a high incidence of dangerous rock pinnacles, a towed multibeam side scan sonar could provide a better option for obtaining seafloor imagery owing to its high acquisition speed and image quality; however, the high probability of losing or damaging such a costly system when towed through the extremely rugose nearshore zones within OCNMS makes this a financially risky proposition. Newer technologies such as interferometric multibeam systems and bathymetric side scan systems also hold great potential for mapping these nearshore rocky areas: they allow high-speed data acquisition, produce side scan imagery precisely geo-referenced to bathymetry, and do not suffer the angular depth dependency of multibeam echosounders, allowing larger range scales to be used in shallower water. Further investigation of these systems is needed to assess their efficiency and utility in these environments compared to traditional side scan sonar and multibeam bathymetry. (PDF contains 43 pages.)

Relevance:

30.00%

Publisher:

Abstract:

Stable isotope (SI) values of carbon (δ13C) and nitrogen (δ15N) are useful for determining trophic connectivity between species within an ecosystem, but interpretation of these data involves important assumptions about sources of intrapopulation variability. We compared intrapopulation variability in δ13C and δ15N for an estuarine omnivore, Spotted Seatrout (Cynoscion nebulosus), to test these assumptions and assess the utility of SI analysis for delineating this species' connectivity with other species in estuarine food webs. Both δ13C and δ15N values showed patterns of enrichment in fish caught from coastal to offshore sites and as a function of fish size. Results for δ13C were consistent in liver and muscle tissue, but liver δ15N showed a negative bias relative to muscle that increased with the absolute δ15N value. Natural variability in both isotopes was 5–10 times higher than that observed in laboratory populations, indicating that environmentally driven intrapopulation variability is detectable, particularly after individual bias is removed through sample pooling. These results corroborate the utility of SI analysis for examining the position of Spotted Seatrout in an estuarine food web. On the basis of these results, we conclude that interpretation of SI data in fishes should account for measurable and ecologically relevant intrapopulation variability for each species and system on a case-by-case basis.
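
Not from the paper, but useful context for why intrapopulation δ15N variability matters: the widely used two-source trophic-position formula (e.g., Post 2002) maps spread in δ15N directly into spread in the inferred trophic position.

```python
# Illustrative only (not the paper's method): the standard trophic-position
# formula TP = tp_base + (d15N_consumer - d15N_base) / delta_n, with the
# canonical per-trophic-step enrichment delta_n of about 3.4 permil.
def trophic_position(d15n_consumer, d15n_base, tp_base=2.0, delta_n=3.4):
    return tp_base + (d15n_consumer - d15n_base) / delta_n

print(trophic_position(14.0, 8.0))  # ~3.8
print(trophic_position(15.0, 8.0))  # a 1 permil shift moves TP by ~0.3
```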

Relevance:

30.00%

Publisher:

Abstract:

We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data; in this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm before presenting results on both synthetic and real biological data sets, showing that it leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which can be downloaded from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/.
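
The sketch below is a much-simplified illustration of the Bayesian merge criterion at the core of methods like BHC: greedily merge the pair of clusters with the largest log Bayes factor in favour of a single shared mean. It uses a plain Gaussian model with a conjugate prior, not the published BHC algorithm (which places a Dirichlet-process prior over tree-consistent partitions), and it is written in Python rather than R.

```python
# Simplified Bayesian agglomerative clustering sketch (not the BHC algorithm).
# Each row of X is a time series; within a cluster, values at each time point
# are modelled as iid N(mu, sigma2) with conjugate prior mu ~ N(0, tau2).
import numpy as np
from itertools import combinations

def log_marginal(X, sigma2=1.0, tau2=1.0):
    """Log marginal likelihood of the rows of X, independent across time."""
    n = X.shape[0]
    s, S = X.sum(axis=0), (X ** 2).sum(axis=0)
    A = n / sigma2 + 1.0 / tau2  # posterior precision of mu
    lp = (-0.5 * n * np.log(2 * np.pi * sigma2)
          - 0.5 * np.log(tau2 * A)
          - S / (2 * sigma2)
          + s ** 2 / (2 * sigma2 ** 2 * A))
    return lp.sum()

def greedy_bayes_cluster(X):
    clusters = [[i] for i in range(X.shape[0])]
    while len(clusters) > 1:
        # Pick the pair with the largest log Bayes factor for merging.
        i, j = max(combinations(range(len(clusters)), 2),
                   key=lambda ij: log_marginal(X[clusters[ij[0]] + clusters[ij[1]]])
                                  - log_marginal(X[clusters[ij[0]]])
                                  - log_marginal(X[clusters[ij[1]]]))
        bf = (log_marginal(X[clusters[i] + clusters[j]])
              - log_marginal(X[clusters[i]]) - log_marginal(X[clusters[j]]))
        if bf <= 0:  # no pair favours merging: stop
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```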

Relevance:

30.00%

Publisher:

Abstract:

Marginal utility theory prescribes the relationship between the objective property of the magnitude of rewards and their subjective value. Despite its pervasive influence, however, there is remarkably little direct empirical evidence for such a theory of value, let alone of its neurobiological basis. We show that human preferences in an intertemporal choice task are best described by a model that integrates marginally diminishing utility with temporal discounting. Using functional magnetic resonance imaging, we show that activity in the dorsal striatum encodes both the marginal utility of rewards, over and above that which can be described by their magnitude alone, and the discounting associated with increasing time. In addition, our data show that dorsal striatum may be involved in integrating subjective valuation systems inherent to time and magnitude, thereby providing an overall metric of value used to guide choice behavior. Furthermore, during choice, we show that anterior cingulate activity correlates with the degree of difficulty associated with dissonance between value and time. Our data support an integrative architecture for decision making, revealing the neural representation of distinct subcomponents of value that may contribute to impulsivity and decisiveness.
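
A hypothetical sketch of the kind of integrated model the abstract describes: a power utility function (marginally diminishing for alpha < 1) combined with hyperbolic temporal discounting. The specific functional forms and parameter values are illustrative assumptions, not the paper's fitted model.

```python
# Hypothetical sketch: subjective value integrating marginally diminishing
# utility with hyperbolic temporal discounting. Forms and parameters assumed.
def subjective_value(magnitude, delay, alpha=0.8, k=0.05):
    utility = magnitude ** alpha         # diminishing marginal utility
    discount = 1.0 / (1.0 + k * delay)   # hyperbolic discounting
    return utility * discount

# Diminishing marginal utility: doubling the reward less than doubles value.
print(subjective_value(10, 30), subjective_value(20, 30))
```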

Relevance:

30.00%

Publisher:

Abstract:

Spatial normalisation is a key element of statistical parametric mapping and related techniques for analysing cohort statistics on voxel arrays and surfaces. The normalisation process involves aligning each individual specimen to a template using some sort of registration algorithm. Any misregistration will result in data being mapped onto the template at the wrong location. At best, this will introduce spatial imprecision into the subsequent statistical analysis. At worst, when the misregistration varies systematically with a covariate of interest, it may lead to false statistical inference. Since misregistration generally depends on the specimen's shape, we investigate here the effect of allowing for shape as a confound in the statistical analysis, with shape represented by the dominant modes of variation observed in the cohort. In a series of experiments on synthetic surface data, we demonstrate how allowing for shape can reveal true effects that were previously masked by systematic misregistration, and also guard against misinterpreting systematic misregistration as a true effect. We introduce some heuristics for disentangling misregistration effects from true effects, and demonstrate the approach's practical utility in a case study of the cortical bone distribution in 268 human femurs.
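
A minimal sketch, under an assumed data layout, of "allowing for shape as a confound": derive the cohort's dominant shape modes by PCA and include them as nuisance covariates in a per-vertex linear model alongside the covariate of interest. The paper's actual pipeline is not reproduced here.

```python
# Sketch: per-vertex linear model with dominant PCA shape modes as nuisance
# covariates. Data layout and the choice of 5 modes are assumptions.
import numpy as np

def fit_with_shape_confound(Y, covariate, shapes, n_modes=5):
    """Y: (subjects, vertices) data mapped onto the template;
    covariate: (subjects,); shapes: (subjects, d) shape descriptors."""
    centered = shapes - shapes.mean(axis=0)
    # Dominant modes of shape variation observed in the cohort.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = centered @ Vt[:n_modes].T
    X = np.column_stack([np.ones(len(covariate)), covariate, modes])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1]  # per-vertex effect of interest, adjusted for shape
```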

Relevance:

30.00%

Publisher:

Abstract:

The complete internal transcribed spacer 1 (ITS1), 5.8S ribosomal DNA, and ITS2 region of the ribosomal DNA from 60 specimens belonging to two closely related bucephalid digeneans (Dollfustrema vaneyi and Dollfustrema hefeiensis) from different localities, hosts, and microhabitat sites were cloned to examine the level of sequence variation and the taxonomic level at which these markers are useful for species identification and phylogeny estimation. Our data show that these molecular markers can help to discriminate the two species, which are morphologically very close and difficult to separate by classical methods. We found 21 haplotypes defined by 44 polymorphic positions in 38 individuals of D. vaneyi, and 16 haplotypes defined by 43 polymorphic positions in 22 individuals of D. hefeiensis. No haplotypes are shared between the two species. Haplotype diversity, though not nucleotide diversity, is similar between the two species. Phylogenetic analyses reveal two robustly supported clades, one corresponding to D. vaneyi and the other to D. hefeiensis. However, the population structures of the two species appear incongruent and show no geographic or host-specific structure, further indicating that the two species may have had a more complex evolutionary history than expected.
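
A simple illustration (not the paper's pipeline) of two of the summary statistics reported above: counting distinct haplotypes and polymorphic (segregating) sites in a set of aligned sequences.

```python
# Count haplotypes and polymorphic sites in aligned sequences (toy example).
def haplotype_summary(aligned_seqs):
    haplotypes = set(aligned_seqs)  # identical sequences share a haplotype
    n_sites = len(aligned_seqs[0])
    polymorphic = sum(1 for i in range(n_sites)
                      if len({s[i] for s in aligned_seqs}) > 1)
    return len(haplotypes), polymorphic

seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "ACGTACGT"]
print(haplotype_summary(seqs))  # (3 haplotypes, 2 polymorphic sites)
```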

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Risk assessment with a thorough family health history is recommended by numerous organizations and is now a required component of the annual physical for Medicare beneficiaries under the Affordable Care Act. However, there are several barriers to incorporating robust risk assessments into routine care. MeTree, a web-based patient-facing health risk assessment tool, was developed with the aim of overcoming these barriers. In order to better understand what factors will be instrumental for broader adoption of risk assessment programs like MeTree in clinical settings, we obtained funding to perform a type III hybrid implementation-effectiveness study in primary care clinics at five diverse healthcare systems. Here, we describe the study's protocol. METHODS/DESIGN: MeTree collects personal medical information and a three-generation family health history from patients on 98 conditions. Using algorithms built entirely from current clinical guidelines, it provides clinical decision support to providers and patients on 30 conditions. All adult patients with an upcoming well-visit appointment at one of the 20 intervention clinics are eligible to participate. Patient-oriented risk reports are provided in real time. Provider-oriented risk reports are uploaded to the electronic medical record for review at the time of the appointment. Implementation outcomes are the enrollment rates of clinics, providers, and patients (enrolled vs approached) and their representativeness compared to the underlying population. Primary effectiveness outcomes are the percent of participants newly identified as being at increased risk for one of the clinical decision support conditions and the percent with appropriate risk-based screening. Secondary outcomes include the percent change in those meeting goals for a healthy lifestyle (diet, exercise, and smoking). Outcomes are measured through electronic medical record data abstraction, patient surveys, and surveys/qualitative interviews of clinical staff. DISCUSSION: This study evaluates factors that are critical to successful implementation of a web-based risk assessment tool into routine clinical care in a variety of healthcare settings. The results will identify resource needs and potential barriers and solutions to implementation in each setting, as well as an understanding of potential effectiveness. TRIAL REGISTRATION: NCT01956773.

Relevance:

30.00%

Publisher:

Abstract:

Shade plots, simple visual representations of abundance matrices from multivariate species assemblage studies, are shown to be an effective aid in choosing an overall transformation (or other pre-treatment) of quantitative data for long-term use, striking an appropriate balance between dominant and less abundant taxa in ensuing resemblance-based multivariate analyses. Though the exposition is entirely general and applicable to all community studies, detailed illustrations of the comparative power and interpretative possibilities of shade plots are given for two estuarine assemblage studies in south-western Australia: (a) macrobenthos in the upper Swan Estuary over a two-year period covering a highly significant precipitation event for the Perth area; and (b) a wide-scale spatial study of the nearshore fish fauna from five divergent estuaries. The utility of transformations of intermediate severity is again demonstrated and, with greater novelty, so is the potential importance of a further mild transformation of all data after differential down-weighting (dispersion weighting) of spatially 'clumped' or 'schooled' species. Among the new techniques utilized is a two-way form of the RELATE test, which demonstrates linking of assemblage structure (fish) to continuous environmental variables (water quality), having removed a categorical factor (estuary differences). Re-orderings of sample and species axes in the associated shade plots are seen to provide transparent explanations at the species level for such continuous multivariate patterns.
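
A minimal sketch, on synthetic data, of the shade-plot idea: render the abundance matrix as a grey-scale image after a transformation of intermediate severity (here the fourth root), so that dominant and less abundant taxa are both visible. Row/column re-ordering by similarity, dispersion weighting, and the RELATE test are not shown.

```python
# Shade-plot sketch on synthetic abundances (taxa x samples).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
abundance = rng.lognormal(mean=1.0, sigma=2.0, size=(20, 12))

# Fourth-root transform: a transformation of intermediate severity.
plt.imshow(abundance ** 0.25, cmap='Greys', aspect='auto')
plt.xlabel('Samples'); plt.ylabel('Taxa (rows re-orderable by similarity)')
plt.title('Shade plot of fourth-root transformed abundances')
plt.show()
```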

Relevance:

30.00%

Publisher:

Abstract:

A problem with use of the geostatistical Kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation. This is because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models and the coefficients used in Kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models. Optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
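
The sketch below illustrates, on made-up data, the local step in this workflow: fitting a spherical model to an empirical variogram for one sub-region. The fitted nugget, sill, and range would then drive a kriging-variance calculation that selects the widest sample spacing meeting an error tolerance; that second step is omitted here.

```python
# Fit a spherical variogram model to an empirical (sub-region) variogram.
# Lag distances and semivariances below are synthetic illustrations.
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical variogram model with range parameter a."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, sill)

lags = np.array([10, 20, 30, 40, 50, 60, 70, 80.0])             # lag distance (m)
gamma = np.array([0.2, 0.45, 0.7, 0.85, 0.95, 1.0, 1.0, 1.02])  # semivariance

(nugget, sill, a), _ = curve_fit(spherical, lags, gamma,
                                 p0=[0.1, 1.0, 50.0],
                                 bounds=([0, 0, 1], [1, 5, 200]))
print(f"nugget={nugget:.2f}, sill={sill:.2f}, range={a:.1f} m")
```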

Relevance:

30.00%

Publisher:

Abstract:

The Microarray Innovations in Leukemia study assessed the clinical utility of gene expression profiling as a single test to subtype leukemias into conventional categories of myeloid and lymphoid malignancies. METHODS: The investigation was performed in 11 laboratories across three continents and included 3,334 patients. An exploratory retrospective stage I study was designed for biomarker discovery and generated whole-genome expression profiles from 2,143 patients with leukemias and myelodysplastic syndromes. The diagnostic accuracy of gene expression profiling was further validated in a prospective second study stage on an independent cohort of 1,191 patients. RESULTS: On the basis of 2,096 samples, the stage I study achieved 92.2% classification accuracy for all 18 distinct classes investigated (median specificity of 99.7%). In a second cohort of 1,152 prospectively collected patients, a classification scheme reached 95.6% median sensitivity and 99.8% median specificity for 14 standard subtypes of acute leukemia (eight acute lymphoblastic leukemia and six acute myeloid leukemia classes, n = 693). In 29 (57%) of 51 discrepant cases, the microarray results outperformed routine diagnostic methods. CONCLUSION: Gene expression profiling is a robust technology for the diagnosis of hematologic malignancies with high accuracy. It may complement current diagnostic algorithms and could offer a reliable platform for patients who lack access to today's state-of-the-art diagnostic work-up. Our comprehensive gene expression data set will be submitted to the public domain to foster research focusing on the molecular understanding of leukemias.

Relevance:

30.00%

Publisher:

Abstract:

This paper compares the Random Regret Minimization and the Random Utility Maximization models of recreational choice. The Random Regret approach is based on the idea that, when choosing, individuals aim to minimize their regret – regret being defined as what one experiences when a non-chosen alternative in a choice set performs better than the chosen one with respect to one or more attributes. The Random Regret paradigm, recently developed in transport economics, presents a tractable, regret-based alternative to the dominant choice paradigm based on Random Utility. Using data from a travel cost study exploring factors that influence kayakers' site-choice decisions in the Republic of Ireland, we estimate both the traditional Random Utility multinomial logit model (RU-MNL) and the Random Regret multinomial logit model (RR-MNL) to gain more insight into site-choice decisions. We further explore whether choices are driven by a utility maximization or a regret minimization paradigm by running a binary logit model that examines the likelihood of the two decision paradigms using site visits and respondents' characteristics as explanatory variables. In addition to being one of the first studies to apply the RR-MNL to an environmental good, this paper also represents the first application of the RR-MNL to compute the logsum, in order to test and strengthen conclusions on the welfare impacts of potential alternative policy scenarios.
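
For concreteness, the sketch below implements the random-regret choice rule of Chorus (2010), on which RR-MNL models of this kind are based: regret for an alternative sums ln(1 + exp(beta_m (x_jm − x_im))) over competing alternatives j and attributes m, with choice probabilities given by a logit on negative regret. Attribute values and coefficients are invented for illustration.

```python
# Random Regret Minimization choice probabilities (Chorus 2010 regret form).
import numpy as np

def rrm_probabilities(X, beta):
    """X: (alternatives, attributes); beta: (attributes,) taste parameters."""
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                # Regret from attribute-wise comparison against alternative j.
                regret[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
    expu = np.exp(-regret)  # logit on negative regret
    return expu / expu.sum()

# Three hypothetical kayaking sites described by two attributes.
X = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
print(rrm_probabilities(X, beta=np.array([0.8, 0.5])))
```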

Relevance:

30.00%

Publisher:

Abstract:

We compare two approaches for estimating the distribution of consumers' willingness to pay (WTP) in discrete choice models. The usual procedure is to estimate the distribution of the utility coefficients and then derive the distribution of WTP, which is the ratio of coefficients. The alternative is to estimate the distribution of WTP directly. We apply both approaches to data on site choice in the Alps. We find that the alternative approach fits the data better, reduces the incidence of exceedingly large estimated WTP values, and provides the analyst with greater control in specifying and testing the distribution of WTP. © 2008 Agricultural and Applied Economics Association.
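
An illustrative simulation (assumed distributions, not the paper's data) of why the two approaches can differ: when WTP is derived as a ratio of random coefficients, a cost coefficient near zero produces extreme WTP draws, whereas specifying the WTP distribution directly keeps its tails under the analyst's control.

```python
# Ratio-of-coefficients WTP vs directly specified WTP (toy simulation).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# "Preference space": WTP = beta_attribute / (-beta_cost), both random.
beta_attr = rng.normal(1.0, 0.5, n)
beta_cost = -np.abs(rng.normal(0.05, 0.03, n))  # cost coefficient near zero
wtp_ratio = beta_attr / (-beta_cost)            # ratio can explode

# "WTP space": the WTP distribution itself is specified (here lognormal).
wtp_direct = rng.lognormal(np.log(20.0), 0.5, n)

print("ratio-based 99th percentile:", np.percentile(wtp_ratio, 99))
print("direct WTP  99th percentile:", np.percentile(wtp_direct, 99))
```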