89 results for distributional weights
Abstract:
This report presents the results of a comprehensive survey of UK university spin-out businesses.
In an effort to enhance our understanding of this sector, a database of 1044 active USOs was compiled from individual university records and internet searches, and matched to a published list of UK university spin-outs. Telephone interviews were conducted with USOs and a final sample of 350 was achieved. Non-response bias was tested for, and weights were constructed to ensure that the findings were representative of the UK population of USOs.
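The weighting step lends itself to a brief illustration. Below is a minimal sketch of post-stratification weighting of the kind described, in which respondents from under-represented strata receive proportionally larger weights; the sector labels and counts are hypothetical, since the report's actual weighting variables are not given here.

```python
# A minimal sketch of post-stratification survey weighting. Stratum labels
# and counts are hypothetical, not taken from the report.

from collections import Counter

population = ["biotech"] * 400 + ["software"] * 444 + ["engineering"] * 200  # 1044 firms
sample     = ["biotech"] * 100 + ["software"] * 180 + ["engineering"] * 70   # 350 respondents

pop_counts = Counter(population)   # stratum sizes in the full database
smp_counts = Counter(sample)       # stratum sizes among respondents

# Weight per respondent: (stratum share of population) / (stratum share of sample)
n_pop, n_smp = len(population), len(sample)
weights = {s: (pop_counts[s] / n_pop) / (smp_counts[s] / n_smp) for s in pop_counts}

for stratum, w in weights.items():
    print(f"{stratum}: weight {w:.3f}")

# Weighted estimates then use these per-respondent weights so that the
# respondents reproduce the stratum mix of the full database.
```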
Abstract:
With the proliferation of geo-positioning and geo-tagging techniques, spatio-textual objects that possess both a geographical location and a textual description are gaining in prevalence, and spatial keyword queries that exploit both location and textual description are gaining in prominence. However, the queries studied so far generally focus on finding individual objects that each satisfy a query rather than finding groups of objects where the objects in a group together satisfy a query.
We define the problem of retrieving a group of spatio-textual objects such that the group's keywords cover the query's keywords, the objects are nearest to the query location, and the inter-object distances are smallest. Specifically, we study three instantiations of this problem, all of which are NP-hard. We devise exact solutions as well as approximate solutions with provable approximation bounds. In addition, we solve the problem of retrieving the top-k groups for the three instantiations, and study a weighted version of the problem that incorporates object weights. We present empirical studies that offer insight into the efficiency of the solutions, as well as the accuracy of the approximate solutions.
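As a rough illustration of the approximate solutions, a minimal greedy sketch is given below: it repeatedly picks the object nearest the query location that covers at least one still-uncovered query keyword. The paper's actual cost functions and guarantees are not reproduced, and the objects and keywords are hypothetical.

```python
# A greedy heuristic sketch for group retrieval: cover the query keywords
# by repeatedly taking the closest object that covers something new.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_group(objects, q_loc, q_keywords):
    """objects: list of (location, keyword_set); returns a covering group."""
    uncovered, group = set(q_keywords), []
    while uncovered:
        candidates = [o for o in objects if o[1] & uncovered and o not in group]
        if not candidates:
            return None                 # query keywords cannot be covered
        best = min(candidates, key=lambda o: dist(o[0], q_loc))
        group.append(best)
        uncovered -= best[1]
    return group

objs = [((1, 1), {"cafe"}), ((2, 0), {"museum", "cafe"}), ((5, 5), {"park"})]
print(greedy_group(objs, (0, 0), {"cafe", "park"}))
```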
Abstract:
The preferences of users are important in route search and planning. For example, when a user plans a trip within a city, their preferences can be expressed as the keywords "shopping mall", "restaurant", and "museum", with weights 0.5, 0.4, and 0.1, respectively; the resulting route should best satisfy these weighted preferences. In this paper, we take weighted user preferences into account in route search, and present a keyword coverage problem, which finds an optimal route from a source location to a target location such that the keyword coverage is optimized and the budget score satisfies a specified constraint. We prove that this problem is NP-hard. To solve this complex problem, we propose an optimal route search based on an A* variant for which we have defined an admissible heuristic function. Experiments conducted on real-world datasets demonstrate both the efficiency and accuracy of our proposed algorithms.
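To illustrate the A*-based approach, below is a minimal sketch of A* search with an admissible straight-line-distance heuristic. The paper's actual variant additionally scores keyword coverage against the budget constraint, which is not reproduced here; the graph and coordinates are hypothetical.

```python
# A* search sketch with an admissible heuristic (straight-line distance
# never overestimates the remaining travel cost on this graph).

import heapq, math

coords = {"s": (0, 0), "a": (1, 1), "b": (2, 0), "t": (3, 1)}
graph = {"s": {"a": 1.5, "b": 2.1}, "a": {"t": 2.1}, "b": {"t": 1.5}, "t": {}}

def h(n):
    return math.dist(coords[n], coords["t"])   # admissible lower bound

def astar(source, target):
    frontier = [(h(source), 0.0, source, [source])]
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == target:
            return g, path
        if seen.get(node, math.inf) <= g:
            continue
        seen[node] = g
        for nxt, w in graph[node].items():
            heapq.heappush(frontier, (g + w + h(nxt), g + w, nxt, path + [nxt]))
    return None

print(astar("s", "t"))   # (3.6, ['s', 'a', 't'])
```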
Abstract:
BACKGROUND: Worldwide data for cancer survival are scarce. We aimed to initiate worldwide surveillance of cancer survival by central analysis of population-based registry data, as a metric of the effectiveness of health systems, and to inform global policy on cancer control.
METHODS: Individual tumour records were submitted by 279 population-based cancer registries in 67 countries for 25·7 million adults (age 15-99 years) and 75,000 children (age 0-14 years) diagnosed with cancer during 1995-2009 and followed up to Dec 31, 2009, or later. We looked at cancers of the stomach, colon, rectum, liver, lung, breast (women), cervix, ovary, and prostate in adults, and adult and childhood leukaemia. Standardised quality control procedures were applied; errors were corrected by the registry concerned. We estimated 5-year net survival, adjusted for background mortality in every country or region by age (single year), sex, and calendar year, and by race or ethnic origin in some countries. Estimates were age-standardised with the International Cancer Survival Standard weights.
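The age-standardisation step is a weighted average of age-specific estimates. Below is a minimal sketch of this calculation; the age bands, survival figures, and weights are placeholders, not the actual International Cancer Survival Standard values.

```python
# Direct age-standardisation sketch: overall survival is a weighted average
# of age-specific net survival, with weights summing to 1.

age_specific_survival = [0.92, 0.85, 0.74, 0.61, 0.48]  # hypothetical 5-year net survival by age band
standard_weights      = [0.07, 0.12, 0.23, 0.29, 0.29]  # hypothetical standard weights (sum to 1)

standardised = sum(w * s for w, s in zip(standard_weights, age_specific_survival))
print(f"Age-standardised 5-year net survival: {standardised:.1%}")   # 65.3%
```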
FINDINGS: 5-year survival from colon, rectal, and breast cancers has increased steadily in most developed countries. For patients diagnosed during 2005-09, survival for colon and rectal cancer reached 60% or more in 22 countries around the world; for breast cancer, 5-year survival rose to 85% or higher in 17 countries worldwide. Liver and lung cancer remain lethal in all nations: for both cancers, 5-year survival is below 20% everywhere in Europe, in the range 15-19% in North America, and as low as 7-9% in Mongolia and Thailand. Striking rises in 5-year survival from prostate cancer have occurred in many countries: survival rose by 10-20% between 1995-99 and 2005-09 in 22 countries in South America, Asia, and Europe, but survival still varies widely around the world, from less than 60% in Bulgaria and Thailand to 95% or more in Brazil, Puerto Rico, and the USA. For cervical cancer, national estimates of 5-year survival range from less than 50% to more than 70%; regional variations are much wider, and improvements between 1995-99 and 2005-09 have generally been slight. For women diagnosed with ovarian cancer in 2005-09, 5-year survival was 40% or higher only in Ecuador, the USA, and 17 countries in Asia and Europe. 5-year survival for stomach cancer in 2005-09 was high (54-58%) in Japan and South Korea, compared with less than 40% in other countries. By contrast, 5-year survival from adult leukaemia in Japan and South Korea (18-23%) is lower than in most other countries. 5-year survival from childhood acute lymphoblastic leukaemia is less than 60% in several countries, but as high as 90% in Canada and four European countries, which suggests major deficiencies in the management of a largely curable disease.
INTERPRETATION: International comparison of survival trends reveals very wide differences that are likely to be attributable to differences in access to early diagnosis and optimum treatment. Continuous worldwide surveillance of cancer survival should become an indispensable source of information for cancer patients and researchers and a stimulus for politicians to improve health policy and health-care systems.
Abstract:
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
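As a rough sketch of the edge-weight model, the snippet below draws weights from a two-component Beta mixture. The mixture parameters are invented for illustration; in the paper they are learned from real-world networks via variational inference, which is not reproduced here.

```python
# Sampling edge weights from a Beta Mixture Model (BMM) sketch.

import random

# Hypothetical 2-component mixture: (mixing weight, alpha, beta)
bmm = [(0.7, 2.0, 5.0),    # component for many light edges
       (0.3, 8.0, 2.0)]    # component for a few heavy edges

def sample_edge_weight():
    r, acc = random.random(), 0.0
    for pi, a, b in bmm:
        acc += pi
        if r <= acc:
            return random.betavariate(a, b)
    return random.betavariate(*bmm[-1][1:])   # guard against rounding

print([round(sample_edge_weight(), 3) for _ in range(5)])
```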
Abstract:
In Britain, the majority of Lower and Middle Paleolithic archaeological finds come from river terrace deposits. The impressive “staircase” terrace sequences of southeast England, and research facilitated by aggregate extraction have provided a considerable body of knowledge about the terrace chronology and associated archaeology in that area. Such research has been essential in considering rates of uplift, climatic cycles, archaeological chronologies, and the landscapes in which hominins lived. It has also promoted the view that southeast England was a major hominin route into Britain. By contrast, the terrace deposits of the southwest have been little studied. The Palaeolithic Rivers of South West Britain (PRoSWEB) project employed a range of geoarchaeological methodologies to address similar questions at different scales, focusing on the rivers Exe, Axe, Otter, and the paleo-Doniford, all of which were located south of the maximum Pleistocene glacial limit (marine oxygen isotope stage [MIS] 4–2). Preliminary analysis of the fieldwork results suggests that although the evolution of these catchments is complex, most conform to a standard staircase-type model, with the exception of the Axe, and, to a lesser extent, the paleo-Doniford, which are anomalous. Although the terrace deposits are less extensive than in southeast Britain, differentiation between terraces does exist, and new dates show that some of these terraces are of great antiquity (MIS 10+). The project also reexamined the distribution of artifacts in the region and confirms the distributional bias to the river valleys, and particularly the rivers draining southward to the paleo–Channel River system. This distribution is consistent with a model of periodic occupation of the British peninsula along and up the major river valleys from the paleo–Channel River corridor. These data have a direct impact on our understanding of the paleolandscapes of the southwest region, and therefore our interpretations of the Paleolithic occupation of the edge of the continental landmass.
Abstract:
Although Answer Set Programming (ASP) is a powerful framework for declarative problem solving, it cannot in an intuitive way handle situations in which some rules are uncertain, or in which it is more important to satisfy some constraints than others. Possibilistic ASP (PASP) is a natural extension of ASP in which certainty weights are associated with each rule. In this paper we contrast two different views on interpreting the weights attached to rules. Under the first view, weights reflect the certainty with which we can conclude the head of a rule when its body is satisfied. Under the second view, weights reflect the certainty that a given rule restricts the considered epistemic states of an agent in a valid way, i.e. it is the certainty that the rule itself is correct. The first view gives rise to a set of weighted answer sets, whereas the second view gives rise to a weighted set of classical answer sets.
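A small sketch may make the contrast concrete. For negation-free programs each answer set is simply the least model, so both views can be illustrated in a few lines; the toy program and weights below are invented and do not reproduce the paper's formal semantics.

```python
# Two readings of rule weights in a possibilistic logic program (sketch).

rules = [                       # (certainty, head, body)
    (1.0, "bird",  []),
    (0.8, "flies", ["bird"]),
]

# View 1: weights qualify conclusions. Each atom is derived with certainty
# min(rule weight, certainty of its body atoms) -> one weighted answer set.
cert, changed = {}, True
while changed:
    changed = False
    for w, head, body in rules:
        if all(b in cert for b in body):
            c = min([w] + [cert[b] for b in body])
            if c > cert.get(head, 0.0):
                cert[head], changed = c, True
print("View 1:", cert)          # {'bird': 1.0, 'flies': 0.8}

# View 2: weights qualify rules. Each level alpha selects the classical
# program of rules with weight >= alpha; its (unweighted) answer set is
# held with certainty alpha -> a weighted set of classical answer sets.
for alpha in sorted({w for w, _, _ in rules}, reverse=True):
    cut = [(h, b) for w, h, b in rules if w >= alpha]
    model = set()
    while True:
        new = {h for h, b in cut if all(x in model for x in b)}
        if new <= model:
            break
        model |= new
    print(f"View 2, alpha={alpha}:", model)
```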
Abstract:
In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients by applying linguistic graph information to neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, by correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
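The MiF definition itself is not given in the abstract; as a point of reference, below is a minimal sketch of the classical Jaccard and Simpson overlap indices that MiF improves upon, applied to hypothetical association neighbourhoods of two nouns.

```python
# Baseline overlap indices that MiF builds on (sketch).

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def simpson(a, b):
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

# Hypothetical association neighbourhoods of two nouns
nbr_dog = {"bark", "pet", "tail", "bone"}
nbr_cat = {"pet", "tail", "whisker"}

print("Jaccard:", jaccard(nbr_dog, nbr_cat))   # 2/5 = 0.4
print("Simpson:", simpson(nbr_dog, nbr_cat))   # 2/3 = 0.667

# MiF then adjusts such overlap scores with random-walk distance and the
# degree distribution, so that hub words do not dominate the similarity.
```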
Abstract:
Bridge weigh-in-motion (B-WIM), a system that uses strain sensors to calculate the weights of trucks passing on bridges overhead, requires accurate axle location and speed information for effective performance. The success of a B-WIM system depends on the accuracy of the axle detection method. It is widely recognised that any form of axle detector on the road surface is not ideal for B-WIM applications, as it can cause disruption to traffic (Ojio & Yamada 2002; Zhao et al. 2005; Chatterjee et al. 2006). Sensors under the bridge, that is, Nothing-on-Road (NOR) B-WIM, can perform axle detection via data acquisition systems that detect a peak in strain as the axle passes. The method is often successful, although not all bridges are suitable for NOR B-WIM due to limitations of the system. Significant research has been carried out to further develop the method and the NOR algorithms, but beam-and-slab bridges with deep beams still present a challenge. With these bridges, the slabs are used for axle detection, but peaks in the slab strains are sensitive to the transverse position of wheels on the beam. This next-generation B-WIM research project extends the current B-WIM algorithm to the problem of axle detection and safety, thus overcoming the existing limitations of current state-of-the-art technology. In this paper, alternative strategies for axle detection were determined using Finite Element analysis, and the findings were then tested in the field. The site selected for testing was in Loughbrickland, Northern Ireland, along the A1 corridor connecting the cities of Belfast and Dublin. The structure is on a central route through the island of Ireland and has a high traffic volume, which made it an optimum location for the study. A further benefit of the chosen location was its proximity to a self-operated weigh station. To determine the accuracy of the proposed B-WIM system and to develop a knowledge base of the traffic load on the structure, a pavement WIM system was also installed on the northbound lane on the approach to the structure. The bridge selected for this B-WIM research comprises 27 pre-cast prestressed concrete Y4-beams and a cast in-situ concrete deck. The structure, a newly constructed integral bridge, spans 19 m and has a skew angle of 22.7°.
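The peak-based NOR axle detection lends itself to a brief sketch: an axle crossing appears as a local peak in a strain signal, and the time gap between peaks carries the axle spacing information. The signal, threshold, and sampling interval below are hypothetical.

```python
# Axle detection sketch: find local peaks in a strain signal above a threshold.

strain = [0.1, 0.2, 1.8, 0.4, 0.2, 0.3, 2.1, 0.5, 0.1]   # hypothetical samples
SAMPLE_DT = 0.01          # seconds between samples (assumed)
THRESHOLD = 1.0           # minimum strain to count as an axle peak

peaks = [i for i in range(1, len(strain) - 1)
         if strain[i] > THRESHOLD
         and strain[i] > strain[i - 1] and strain[i] > strain[i + 1]]
print("Axle peaks at samples:", peaks)          # [2, 6]

if len(peaks) >= 2:
    gap = (peaks[1] - peaks[0]) * SAMPLE_DT     # time between axles
    print(f"Time between axles: {gap:.2f} s")   # with speed, gives axle spacing
```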
Abstract:
This paper implements a momentum strategy among a host of market anomalies. Our investment universe consists of the 15 top (long-leg) and 15 bottom (short-leg) anomaly portfolios. The proposed active strategy buys (sells short) a subset of the top (bottom) anomaly portfolios based on past one-month return. The evidence shows statistically strong and economically meaningful persistence in anomaly payoffs. Our strategy consistently outperforms a naive benchmark that weights anomalies equally and yields an abnormal monthly return ranging between 1.27% and 1.47%. The persistence is robust in the post-2000 period and to various other considerations, and is stronger following episodes of high investor sentiment.
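As a rough illustration of the rotation rule, the sketch below buys the long-leg portfolios with the best past one-month return and shorts the short-leg portfolios with the worst; the portfolio names, returns, and subset size are hypothetical.

```python
# Anomaly-rotation sketch: select long/short subsets by past one-month return.

TOP_K = 3

past_month = {                      # hypothetical past one-month returns
    "long_value": 0.021, "long_momentum": 0.034, "long_quality": 0.010,
    "long_size": -0.004,
    "short_accruals": -0.018, "short_issuance": -0.025, "short_beta": -0.009,
}

longs  = sorted((k for k in past_month if k.startswith("long_")),
                key=past_month.get, reverse=True)[:TOP_K]
shorts = sorted((k for k in past_month if k.startswith("short_")),
                key=past_month.get)[:TOP_K]

print("Buy :", longs)    # best-performing long-leg portfolios
print("Sell:", shorts)   # worst-performing short-leg portfolios
```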
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of four groups (memory, processor, computation, and storage) is to the application that needs to be executed on the cloud. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications: the first is a financial risk application and the second a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
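The ranking step is essentially a weighted score over normalised benchmark groups. Below is a minimal sketch under hypothetical VM names, benchmark values, and user weights.

```python
# Weighted VM ranking sketch: score = sum of (group weight * benchmark value).

user_weights = {"memory": 0.4, "processor": 0.3, "computation": 0.2, "storage": 0.1}

benchmarks = {   # hypothetical normalised scores in [0, 1], higher is better
    "vm.small":  {"memory": 0.3, "processor": 0.4, "computation": 0.5, "storage": 0.6},
    "vm.medium": {"memory": 0.6, "processor": 0.6, "computation": 0.6, "storage": 0.5},
    "vm.large":  {"memory": 0.9, "processor": 0.8, "computation": 0.7, "storage": 0.4},
}

scores = {vm: sum(user_weights[g] * v for g, v in b.items())
          for vm, b in benchmarks.items()}

for vm, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vm}: {s:.2f}")   # e.g. vm.large ranks first at 0.78
```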
Abstract:
Neuropeptides such as neuropeptide Y (NPY) and vasoactive intestinal polypeptide (VIP) have been shown by our research group to be present in human dental pulp tissue. Neuropeptides cannot cross cell membranes, and therefore to exert their biological effects they must bind to selected receptors on the surface of target cell membranes. However, the expression of receptor proteins for NPY and/or VIP has yet to be reported in human pulp tissue. The presence of neuropeptide receptors can be conveniently determined by Western blotting using specific anti-receptor antibodies. Objectives: The aim of this work was to identify the presence of the NPY Y1 receptor and the VIP receptor VPAC1 in human dental pulp tissue from both intact and carious teeth using Western blotting. Methods: Pulp tissue was collected from both intact and carious teeth, and membrane preparations from these tissues were then subjected to sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), transferred to nitrocellulose, and probed with specific antibodies to either the NPY Y1 receptor or the VPAC1 receptor. Results: Individual Western blotting experiments revealed the presence of immunoreactive bands corresponding to the known molecular weights of the NPY Y1 and VPAC1 receptor proteins in both intact and carious pulp samples. Conclusions: Demonstration of NPY Y1 and VPAC1 receptor protein expression in pulpal tissue from intact and carious teeth provides further support for the roles of these neuropeptides in pulpal health and disease.
Abstract:
The last three decades have seen social enterprises in the United Kingdom pushed to the forefront of welfare delivery, workfare and area-based regeneration. For critics, this is repositioning the sector around a neoliberal politics that privileges marketization, state roll-back and disciplining community groups to become more self-reliant. Successive governments have developed bespoke products, fiscal instruments and intermediaries to enable and extend the social finance market. Such assemblages are critical to roll-out tactics, but they are also necessary and useful for more reformist understandings of economic alterity. The issue is not social finance itself but how it is used, which inevitably entangles social enterprises in a form of legitimation crisis between the need to satisfy financial returns and the need to keep community interests on board. This paper argues that social finance, in how it is used, politically domesticated and made to achieve re-distributional outcomes, is a necessary component of counter-hegemonic strategies. Such assemblages are as important to radical community development as they are to neoliberalism, and the analysis concludes by highlighting the need to develop a better understanding of finance, the ethics of its use, and the tactical compromises involved in scaling it as an alternative to public and private markets.
Abstract:
OBJECTIVE: The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA).
DESIGN: A range of PSEA were used to estimate the serving sizes of a range of commonly eaten foods and rated for ease of use and likelihood of future usage.
SETTING: For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA.
SUBJECTS: Males and females aged 18-64 years (n 120).
RESULTS: The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. a 200 ml disposable cup), which were deemed easy to use (median rating of 5), likely to be used again in future (all scored either 4 or 5 on a scale from 1='not very likely' to 5='very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA when preparing a meal (94 %), particularly dinner (86 %), in the home (89 %; all P<0·001), for amorphous grain foods.
CONCLUSIONS: Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.