979 results for "Adaptative large neighborhood search"


Relevance: 30.00%

Abstract:

Like other regions of the world, the EU is developing biofuels in the transport sector to reduce oil consumption and mitigate climate change. To promote them, it has adopted favourable legislation since the 2000s. In 2009 it even decided to oblige each Member State to ensure that by 2020 the share of energy from renewable sources reached at least 10% of final energy consumption in the transport sector. Biofuels are considered the main instrument for reaching that percentage, since the development of other alternatives (such as hydrogen and electricity) will take much longer than expected. These legislative initiatives have driven the production and consumption of biofuels in the EU: biofuels accounted for 4.7% of EU transport fuel consumption in 2011. They have also led to trade and investment in biofuels on a global scale.

This large-scale expansion of biofuels has, however, revealed numerous negative impacts. These stem from the fact that first-generation biofuels (i.e., those produced from food crops), of which the most important types are biodiesel and bioethanol, are used almost exclusively to meet the EU's renewable 10% target in transport. Their negative impacts are socioeconomic (food price rises), legal (land-grabbing), environmental (for instance, water stress and water pollution, soil erosion and reduced biodiversity), climatic (direct and indirect land-use effects resulting in more greenhouse gas emissions) and fiscal (subsidies and tax relief). The extent of these negative impacts depends on how biofuel feedstocks are produced and processed, the scale of production and, in particular, how they influence direct land-use change (DLUC), indirect land-use change (ILUC) and international trade. These negative impacts have provoked mounting debate in recent years, with a particular focus on ILUC, and have forced the EU to re-examine how it deals with biofuels and to propose amendments to its legislation.

So far, EU legislation foresees that only sustainable biofuels (produced in the EU or imported) can be used to meet the 10% target and receive public support; to that end, mandatory sustainability criteria have been defined. Yet these criteria have a major flaw: their measurement of greenhouse gas savings from biofuels does not account for the greenhouse gas emissions resulting from ILUC. The Energy Council of June 2014 agreed to set a limit on the extent to which first-generation biofuels can count towards the 10% target, but this limit appears less stringent than those previously proposed by the European Commission and the European Parliament. It also agreed to introduce incentives for the use of advanced (second- and third-generation) biofuels, which would be allowed to count double towards the 10% target; this again appears extremely modest compared with earlier proposals. Finally, the approach chosen to account for the greenhouse gas emissions due to ILUC appears more than cautious: the Energy Council agreed that the European Commission will report on ILUC emissions using provisional estimated factors, and a review clause will permit later adjustment of these factors. Given these legislative orientations by the Energy Council, one cannot yet speak of a major shift in EU biofuels policy.

Bolder changes would probably have meant risking the collapse of the high-emission conventional biodiesel industry, which currently makes up the majority of Europe's biofuel production; the interests of EU farmers would also have been affected. There is nevertheless a tension between these legislative orientations and the new Commission's proposals beyond 2020. In any case, many uncertainties remain. As long as solutions have not been found to minimize the substantial collateral damage caused by first-generation biofuels, more scientific study and caution are needed. Meanwhile, it would be wise to pursue alternative paths towards a sustainable transport sector, such as stringent emission and energy standards for all vehicles, better public transport systems, automobiles that run on renewable energy other than biofuels, or other alternatives beyond the present imagination.

Relevance: 30.00%

Abstract:

We report on the discovery of a large-scale wall in the direction of Abell 22. Using photometric and spectroscopic data from the Las Campanas Observatory and Anglo-Australian Telescope Rich Cluster Survey, Abell 22 is found to exhibit a highly unusual and striking redshift distribution. We show, by examining the galaxy distributions both in redshift space and on the colour-magnitude plane, that Abell 22 exhibits a foreground wall-like structure. A search for other galaxies and clusters in the nearby region using the 2dF Galaxy Redshift Survey database suggests that the wall-like structure is a significant large-scale, non-virialized filament that runs between two other Abell clusters on either side of Abell 22. The filament extends at least 40 h⁻¹ Mpc in length and 10 h⁻¹ Mpc in width at the redshift of Abell 22.

Relevance: 30.00%

Abstract:

With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames; due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. The OVA-File is a hierarchical structure with two novel features: 1) the whole file is partitioned into slices such that only a small number of slices need to be accessed and checked during k-nearest-neighbour (kNN) search, and 2) insertions of new vectors into the OVA-File are handled efficiently, such that the average distance between a new vector and the approximations near its position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN answer. The number of candidate OVA-Slices is controlled by a user-defined parameter delta; by adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Querying by video clips consisting of multiple frames is also discussed. Extensive experimental studies using real video data sets showed that our methods can yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieved even more efficient performance.
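
To make the slice-based search concrete, here is a minimal Python sketch of the OVA-LOW idea under our own simplifying assumptions (in-memory lists stand in for the VA-file of approximations; `slice_centers` and `slices` are hypothetical structures, not the paper's API): rank slices by center-to-query distance and scan only the top delta slices.

```python
import numpy as np

def ova_low_knn(query, slice_centers, slices, k, delta):
    """Approximate kNN in the spirit of OVA-LOW (a sketch, not the
    authors' implementation). `slices[i]` is a list of (vector_id,
    vector) pairs whose representative center is `slice_centers[i]`."""
    # Rank slices by the distance from their center to the query.
    order = np.argsort([np.linalg.norm(c - query) for c in slice_centers])
    candidates = []
    for s in order[:delta]:  # visit only the `delta` most promising slices
        for vec_id, vec in slices[s]:
            candidates.append((float(np.linalg.norm(vec - query)), vec_id))
    candidates.sort()
    return candidates[:k]  # (distance, id) pairs: the approximate kNN
```

A larger delta widens the scan, trading higher query cost for better result quality, which matches the tunable trade-off described above.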

Relevance: 30.00%

Abstract:

With rapid advances in video processing technologies and ever-faster growth in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of interest to users. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, with each frame typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses three major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measures, and (c) efficient indexing of the compact representations. In this paper, we propose a number of methods to achieve fast similarity search over very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius and density. ViTri similarity is measured by the volume of the intersection of two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by the two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences, greatly reducing the time complexity of the video similarity measure. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique that rotates and shifts the original axis system using PCA so that the original distance between two high-dimensional vectors is maximally preserved after the mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. This transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on large real video datasets demonstrate the effectiveness of our proposals, which significantly outperform existing methods.
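
The one-dimensional transformation can be sketched in a few lines. The following Python is our own illustration under stated assumptions (a sorted in-memory array queried via `bisect` stands in for the on-disk B+-tree; the function names are invented): project the ViTri positions onto their first principal component and range-filter candidates on that single key.

```python
import numpy as np
from bisect import bisect_left, bisect_right

def fit_axis(positions):
    """First principal component of the ViTri center positions (PCA)."""
    mean = positions.mean(axis=0)
    _, _, vt = np.linalg.svd(positions - mean, full_matrices=False)
    return mean, vt[0]  # the shift and the rotation axis

def build_index(positions, mean, axis):
    """Sorted one-dimensional keys: a stand-in for the B+-tree."""
    keys = (positions - mean) @ axis
    order = np.argsort(keys)
    return keys[order], order

def candidate_vitris(keys, order, query_key, window):
    """Keep only ViTris whose key lies within `window` of the query key;
    the rest are filtered out without any high-dimensional computation."""
    lo = bisect_left(keys.tolist(), query_key - window)
    hi = bisect_right(keys.tolist(), query_key + window)
    return order[lo:hi]
```

Because projection onto a line never increases distances, the key difference lower-bounds the true distance between centers, so taking `window` as the sum of the two radii prunes only hyperspheres that provably cannot intersect.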

Relevance: 30.00%

Abstract:

This article explores consumer Web-search satisfaction. It commences with a brief overview of the concepts of consumer information search and consumer satisfaction. Consumer Web-adoption issues are then briefly discussed, and the importance of consumer search satisfaction is highlighted in relation to the adoption of the Web as an additional source of consumer information. Research hypotheses are developed, and the methodology of a large-scale consumer experiment to record consumer Web-search behaviour is described. The hypotheses are tested and the data explored in relation to post-Web-search satisfaction. The results suggest that consumer post-Web-search satisfaction judgments may derive from subconscious judgments of Web-search efficiency, an empirical calculation of which is problematic in unlimited information environments such as the Web. The results are discussed and a future research agenda is briefly outlined.

Relevance: 30.00%

Abstract:

The principled statistical application of the Gaussian random field models used in geostatistics has historically been limited to small data sets. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples in order to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches have been adopted to solve this problem, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis, and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative-entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
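
To see where the cost arises, recall the textbook Gaussian-process (simple kriging) predictive equations (standard material, not taken from this article): for n observations y with covariance matrix K, noise variance σ², and covariance vector k* between a new location x* and the samples,

```latex
\mu(x_*) = k_*^{\top}\,(K + \sigma^2 I)^{-1}\mathbf{y},
\qquad
\sigma^2(x_*) = k(x_*, x_*) - k_*^{\top}\,(K + \sigma^2 I)^{-1}k_* .
```

Forming and inverting K + σ²I costs O(n²) memory and O(n³) time, which is precisely the bottleneck the sparse sequential algorithm sidesteps by retaining only a subset of basis vectors.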

Relevance: 30.00%

Abstract:

This research aimed to provide a comparative analysis of South Asian and White British students in their academic attainment at school and university and in their search for employment. Data were gathered using a variety of methodological techniques. Completed postal questionnaires were received from 301 South Asian and White British undergraduates from 12 British universities who were in their final year of study in 1985. In-depth interviews were also conducted with 49 graduates, a self-selected group from the original sample. Additional information was collected using diary report forms and by administering a second postal questionnaire to selected South Asian and White British participants. It was found that while the pre-university qualifications of the White British and South Asian undergraduates did not differ considerably, many members of the latter group had travelled a more arduous path to academic success. For some South Asians, school experiences included confronting racist attitudes and behaviour from both teachers and peers. The South Asian respondents in this study were more likely than their White British counterparts to have attempted some C.S.E. examinations, obtained some of their 'O' levels in the Sixth Form and retaken their 'A' levels. As a result, the South Asians were on average older than their White British peers when entering university. A small sample of South Asians also found that the effects of racism were perpetuated in higher education, where they faced difficulty both academically and socially. Overall, however, since going to university most South Asians felt further drawn towards their 'cultural background', this often being their own unique view of 'Asianness'. Regarding their plans after graduation, it was found that South Asians were more likely to opt for further study, believing that they needed to be better qualified than their White British counterparts. Those South Asians who were searching for work were better qualified, willing to accept a lower minimum salary, had made more job applications and had started searching for work earlier than the comparable White British participants. Also, although they generally had no difficulty in obtaining interviews, South Asian applicants were less likely to receive an offer of employment. In the final analysis, examining their future plans, it was found that a large proportion of South Asian graduates aspired towards self-employment.

Relevance: 30.00%

Abstract:

Previous research has shown that adults with dyslexia (AwD) are disproportionately impacted by close spacing of stimuli and increased numbers of distractors in visual search tasks compared to controls [1]. Using an orientation discrimination task, the present study extended these findings to show that, even in conditions where target search was not required: (i) AwD were adversely affected by both crowding and increased numbers of distractors; (ii) AwD had more pronounced difficulty with distractor exclusion in the left visual field; and (iii) measures of crowding and distractor exclusion correlated significantly with literacy measures. Furthermore, these difficulties were not accounted for by the presence of covarying symptoms of ADHD in the participant groups. These findings provide further evidence that the ability to exclude distracting stimuli likely contributes to the reported visual attention difficulties in AwD and to the aetiology of literacy difficulties. The pattern of results is consistent with weaker and asymmetric attention in AwD.

Relevance: 30.00%

Abstract:

Background: Qualitative research makes an important contribution to our understanding of health and healthcare. However, qualitative evidence can be difficult to search for and identify, and the effectiveness of different types of search strategies is unknown.

Methods: Three search strategies for qualitative research in the example area of support for breast-feeding were evaluated using six electronic bibliographic databases. The strategies were based on thesaurus terms, free-text terms and broad-based terms, and were combined with recognised search terms for support for breast-feeding previously used in a Cochrane review. For each strategy, we evaluated recall (the potentially relevant records found) and precision (the proportion of retrieved records that were actually relevant).

Results: The three strategies combined retrieved a total of 7420 potentially relevant records, of which 262 were judged relevant. Using any one strategy alone would have missed relevant records. The broad-based strategy had the highest recall and the thesaurus strategy the highest precision. Precision was generally poor: 96% of the records initially identified as potentially relevant were deemed irrelevant. Searching for qualitative research thus involves trade-offs between recall and precision.

Conclusions: These findings confirm that strategies that attempt to maximise the number of potentially relevant records found are likely to return a large number of false positives. They also suggest that a range of search terms is required to optimise searching for qualitative evidence. This underlines the problems of current methods for indexing qualitative research in bibliographic databases and indicates where improvements need to be made.
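
The reported figures are internally consistent; as a quick check:

```latex
\text{precision} = \frac{262}{7420} \approx 3.5\%,
\qquad
1 - 0.035 \approx 96.5\%\ \text{irrelevant},
```

which matches the stated figure that roughly 96% of the potentially relevant records turned out to be irrelevant.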

Relevance: 30.00%

Abstract:

A search for new heavy resonances decaying to boson pairs (WZ, WW or ZZ) using 20.3 fb⁻¹ of proton-proton collision data at a center-of-mass energy of 8 TeV is presented. The data were recorded by the ATLAS detector at the Large Hadron Collider (LHC) in 2012. The analysis combines several search channels with leptonic, semi-leptonic and fully hadronic final states. The diboson invariant mass spectrum is studied for local excesses above the Standard Model background prediction, and no significant excess is observed in the combined analysis. 95% confidence-level limits are set on the cross section times branching ratio for three signal models: an extended gauge model with a heavy W′ boson, a bulk Randall-Sundrum model with a spin-2 graviton, and a simplified model with a heavy vector triplet. Among the individual search channels, the fully hadronic channel, in which boson-tagging techniques and jet-substructure cuts are used, is presented in the most detail. Local excesses are found in the dijet mass distribution around 2 TeV, corresponding to a global significance of 2.5 standard deviations. This deviation from the Standard Model prediction has prompted many theoretical explanations, and the possibilities can be explored further using LHC Run 2 data.

Relevance: 30.00%

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which has implications for traffic simulation as well. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
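
To illustrate the dynamic programming at the core of such models, here is a minimal Python sketch (our own illustration, not code from the thesis) of solving the logsum value function of a recursive-logit-style route choice model on a toy acyclic network. Here `utils[k][a]` is an assumed instantaneous utility of moving from node k to node a, and the destination has value zero.

```python
import math

def solve_values(utils, dest, tol=1e-10, max_iter=1000):
    """Fixed-point iteration on the logsum (expected maximum utility)
    equations V(k) = log(sum_a exp(u(a|k) + V(a))), with V(dest) = 0."""
    V = {k: 0.0 for k in utils}
    for _ in range(max_iter):
        delta = 0.0
        for k, actions in utils.items():
            if k == dest:
                continue
            new = math.log(sum(math.exp(u + V[a]) for a, u in actions.items()))
            delta = max(delta, abs(new - V[k]))
            V[k] = new
        if delta < tol:
            break
    return V

def choice_probs(utils, V, k):
    """Logit choice probabilities at node k implied by the values V."""
    w = {a: math.exp(u + V[a]) for a, u in utils[k].items()}
    z = sum(w.values())
    return {a: v / z for a, v in w.items()}

# Tiny example network: arcs carry (negative) travel-time utilities.
utils = {
    "o": {"a": -1.0, "b": -1.5},
    "a": {"d": -1.0},
    "b": {"d": -0.5},
    "d": {},
}
V = solve_values(utils, dest="d")
print(choice_probs(utils, V, "o"))  # {'a': 0.5, 'b': 0.5}
```

Once V is known, every path probability follows from the arc-level logit probabilities, which is what makes the approach attractive for large networks; convergence of the fixed point needs suitable conditions (e.g., acyclicity or discounting), which the toy network satisfies.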

Relevance: 30.00%

Abstract:

Searches for the supersymmetric partner of the top quark (stop) are motivated by natural supersymmetry, where the stop has to be light to cancel the large radiative corrections to the Higgs boson mass. This thesis presents three different searches for the stop at √s = 8 TeV and √s = 13 TeV using data from the ATLAS experiment at CERN's Large Hadron Collider. The thesis also includes a study of the primary vertex reconstruction performance in data and simulation at √s = 7 TeV using tt̄ and Z events. All stop searches presented are carried out in final states with a single lepton, four or more jets and large missing transverse energy. A search for direct stop pair production is conducted with 20.3 fb⁻¹ of data at a center-of-mass energy of √s = 8 TeV. Several stop decay scenarios are considered, including those to a top quark and the lightest neutralino and to a bottom quark and the lightest chargino. The sensitivity of the analysis is also studied in the context of various phenomenological MSSM models in which more complex decay scenarios can be present. Two different analyses are carried out at √s = 13 TeV: the first is a search for both gluino-mediated and direct stop pair production with 3.2 fb⁻¹ of data, while the second is a search for direct stop pair production with 13.2 fb⁻¹ of data in the decay scenario to a bottom quark and the lightest chargino. The results of the analyses show no significant excess over the Standard Model predictions in the observed data. Consequently, exclusion limits are set at 95% CL on the masses of the stop and the lightest neutralino.

Relevance: 30.00%

Abstract:

Large telescopes require new technologies with a high level of technological maturity. This project involved building an adaptive optics test bench for the on-sky performance evaluation of related devices. The bench was successfully integrated at the Mont Mégantic Observatory and was used to evaluate the performance of a pyramid wavefront sensor. The system achieved an effective reduction of the point spread function by a factor of two. Several improvements are possible to further increase the system's performance.

Relevance: 30.00%

Abstract:

In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs, as well as the continuous stream of information produced by the entities in these graphs, make them dynamic in nature. Examples include social networks where users post status updates, images and videos; phone call networks where nodes send text messages or place phone calls; road traffic networks where the traffic behavior of road segments changes constantly; and so on. There is tremendous value in storing, managing and analyzing such dynamic graphs and deriving meaningful insights in real time. However, the majority of work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on analysis tasks, and the challenges in building efficient systems that can support such tasks at large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs.

In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to replicate eagerly or lazily, in order to minimize network communication and support low-latency querying.

In the second part, I study the parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhood. I build the system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables the sharing of partial aggregates across different queries and allows partial pre-computation of the aggregates to minimize query latencies and increase throughput.

Finally, I extend the framework to support the continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. Query specification in this system is expressed using a set of active structural primitives, which allows the query evaluator to apply a set of novel optimization techniques and thereby achieve high throughput.

Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
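
The abstract does not spell out the replication rule, but the kind of per-node test a hybrid read-write-frequency policy might apply can be sketched as follows (a minimal sketch; the names and thresholds are our own, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    reads: int   # remote read frequency observed for this node
    writes: int  # write (update) frequency observed for this node

def replication_decision(stats: NodeStats,
                         replicate_ratio: float = 2.0,
                         eager_write_rate: int = 10) -> str:
    """Hypothetical policy: replicate a node's data on the reading
    partition only when reads sufficiently outweigh writes (otherwise
    sync traffic costs more than the saved remote reads); among
    replicated nodes, push updates eagerly only when writes are rare,
    and batch them lazily when writes are frequent."""
    if stats.reads < replicate_ratio * max(stats.writes, 1):
        return "no-replica"      # keep one copy, serve reads remotely
    if stats.writes <= eager_write_rate:
        return "eager-replica"   # push each update immediately
    return "lazy-replica"        # batch updates, synchronize on read

print(replication_decision(NodeStats(reads=100, writes=3)))  # eager-replica
print(replication_decision(NodeStats(reads=5, writes=50)))   # no-replica
```

Monitoring the counters over a sliding window lets the decision change as workloads shift, which is what makes such a policy dynamic.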

Relevance: 30.00%

Abstract:

Individuals living in highly networked societies publish a large amount of personal, and potentially sensitive, information online. Web investigators can exploit such information for a variety of purposes, such as background vetting and fraud detection. However, such investigations require many expensive person-hours of effort. This paper describes InfoScout, a search tool intended to reduce the time it takes to identify and gather subject-centric information on the Web. InfoScout collects relevance-feedback information from the investigator in order to rerank search results, allowing the intended information to be discovered more quickly. Users may still direct their search as they see fit, issuing ad hoc queries and filtering existing results by keywords. Design choices are informed by prior work and industry collaboration.
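
The abstract does not say which reranking method InfoScout uses; as a generic illustration of relevance-feedback reranking, here is the classic Rocchio update over TF-IDF-style document vectors (a standard technique, named as such, not necessarily InfoScout's method):

```python
import numpy as np

def rocchio(query_vec, relevant, nonrelevant,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector toward the centroid
    of results the investigator marked relevant, and away from the
    centroid of those marked non-relevant."""
    q = alpha * query_vec
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

def rerank(results, vectors, query_vec):
    """Order result ids by cosine similarity to the updated query."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return sorted(results, key=lambda r: cos(vectors[r], query_vec),
                  reverse=True)
```

Each round of investigator feedback re-runs the update, so documents resembling those already marked relevant float toward the top of the result list while ad hoc queries and keyword filters remain available.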