798 results for User-generated advertising
Abstract:
Next Generation Access Networks (NGAN) are the next step forward in delivering broadband services and facilitating the integration of different technologies. It is plausible to assume that, from a technological standpoint, the Future Internet will be composed of long-range high-speed optical networks; a number of wireless networks at the edge; and, in between, several access technologies, among which Passive Optical Networks (xPON) are very likely to succeed, owing to their simplicity, low cost, and increased bandwidth. Among the different PON technologies, the Ethernet PON (EPON) is the most promising alternative for satisfying operator and user needs, due to its low cost, flexibility, and interoperability with other technologies. One of the most interesting challenges in such technologies relates to the scheduling and allocation of resources in the upstream (shared) channel. The aim of this research project is to study and evaluate current contributions and to propose new, efficient solutions to the resource allocation issues in Next Generation EPON (NG-EPON). Key issues in this context are future end-user needs, integrated quality of service (QoS) support, and optimized service provisioning for real-time and elastic flows. This project will unveil research opportunities, issue recommendations, and propose novel mechanisms associated with convergence within heterogeneous access networks, and will thus serve as a basis for long-term research projects in this direction. The project has served as a platform for the generation of new concepts and solutions that were published in national and international conferences, in scientific journals, and in a book chapter. We expect additional research publications to follow in the coming months.
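To make the upstream scheduling problem concrete, the sketch below illustrates one classical grant-sizing strategy (a limited-service, IPACT-style polling discipline) that dynamic bandwidth allocation work for EPON typically builds upon; the function name, grant cap and request values are illustrative assumptions, not the mechanism proposed by this project.

```python
# Illustrative limited-service grant sizing for the EPON upstream channel
# (IPACT-style polling). All names and numbers are hypothetical; this is not
# the allocation mechanism proposed in the project.

def allocate_grants(requests_bytes, max_grant_bytes=15500):
    """Cap each ONU's grant at a fixed maximum to bound the polling cycle."""
    return [min(request, max_grant_bytes) for request in requests_bytes]

# Example: four ONUs report their queue occupancy in REPORT messages.
requests = [4000, 22000, 0, 9000]
print(allocate_grants(requests))  # [4000, 15500, 0, 9000]
```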
Abstract:
Our work is concerned with user modelling in open environments. Our proposal follows the line of contributions to advances in user modelling in open environments enabled by Agent Technology, in what has been called the Smart User Model (SUM). Our research contains a holistic study of user modelling across several research areas related to users. We have developed a conceptualization of user modelling by means of examples from a broad range of research areas, with the aim of improving our understanding of user modelling and its role in the next generation of open and distributed service environments. This report is organized as follows: in chapter 1 we introduce our motivation and objectives. Then, in chapters 2, 3, 4 and 5, we provide the state of the art on user modelling. In chapter 2, we give the main definitions of the elements described in the report. In chapter 3, we present a historical perspective on user models. In chapter 4, we provide a review of user models from the perspective of different research areas, with special emphasis on the give-and-take relationship between Agent Technology and user modelling. In chapter 5, we describe the main challenges that, from our point of view, need to be tackled by researchers wanting to contribute to advances in user modelling. The study of the state of the art leads to the exploratory work in chapter 6, where we define a SUM and a methodology for working with it, and present some case studies to illustrate the methodology. Finally, we present the thesis proposal to continue this work, together with the corresponding work schedule and timeline.
Abstract:
The explosive growth of the Internet in recent years has been reflected in the ever-increasing diversity and heterogeneity of user preferences and of the types and features of devices and access networks. The heterogeneity of the context of the users who request Web content is usually not taken into account by the servers that deliver it, which means that the content will not always suit users' needs. In the particular case of e-learning platforms this issue is especially critical, because it puts at risk the knowledge acquired by their users. In this paper we present a system that aims to provide the dotLRN e-learning platform with the capability to adapt to its users' context. By integrating dotLRN with a multi-agent hypermedia system, the online courses being undertaken by students, as well as their learning environment, are adapted in real time.
Abstract:
A pool of oligonucleotides encoding a start methionine and nine random amino acids was inserted at the 5'-end of the gene for the yeast cytochrome oxidase subunit IV lacking its own mitochondrial targeting sequence. Approximately one-quarter of the randomly generated sequences targeted subunit IV to its correct intramitochondrial location in vivo. Sequence analysis of 89 randomly generated sequences showed that their efficiencies as mitochondrial targeting signals correlated with the potential to fold into an amphiphilic alpha-helix. Functional targeting sequences were enriched in arginine and isoleucine residues but contained few aspartate, glutamate, and proline residues. Nonfunctional sequences predicted to have significant helical amphiphilicity often had at least one acidic or multiple helix-breaking residues that would be expected to interfere with targeting function. These results support the hypothesis that the signal for targeting a protein into the mitochondrial matrix is usually a positively charged amphiphilic helix.
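A standard way to quantify the helical amphiphilicity invoked above is the hydrophobic moment, sketched below for a candidate presequence; the hydrophobicity scale is approximate (Eisenberg consensus-style values), the example sequence is hypothetical, and this is not the analysis carried out in the study.

```python
import math

# Hydrophobic moment of a putative alpha-helix: successive residues are placed
# around the helical wheel at ~100 degree increments and their hydrophobicities
# summed as a vector. Scale values approximate the Eisenberg consensus scale;
# the example sequence is a hypothetical placeholder.
HYDROPHOBICITY = {
    'I': 0.73, 'F': 0.61, 'V': 0.54, 'L': 0.53, 'W': 0.37, 'M': 0.26,
    'A': 0.25, 'G': 0.16, 'C': 0.04, 'Y': 0.02, 'P': -0.07, 'T': -0.18,
    'S': -0.26, 'H': -0.40, 'E': -0.62, 'N': -0.64, 'Q': -0.69,
    'D': -0.72, 'K': -1.10, 'R': -1.76,
}

def hydrophobic_moment(sequence, delta_deg=100.0):
    """Larger values suggest a more amphiphilic helix."""
    delta = math.radians(delta_deg)
    sin_sum = sum(HYDROPHOBICITY[aa] * math.sin(i * delta)
                  for i, aa in enumerate(sequence))
    cos_sum = sum(HYDROPHOBICITY[aa] * math.cos(i * delta)
                  for i, aa in enumerate(sequence))
    return math.hypot(sin_sum, cos_sum) / len(sequence)

# Hypothetical presequence: start methionine plus nine random residues.
print(hydrophobic_moment("MRILSIRAIL"))
```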
Abstract:
The optimization of the pilot overhead in single-user wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depend on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
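In our own notation (not necessarily the paper's), the scaling stated above can be summarized as follows; the constants are unspecified, SNR-dependent quantities.

```latex
% Sketch of the stated scaling (notation ours): with normalized Doppler
% frequency $f_{\mathrm{D}}$, optimized pilot overhead $\alpha^{\star}$, and
% SNR-dependent constants $\kappa$, $\kappa'$,
\begin{equation}
  \alpha^{\star} \approx \kappa \sqrt{f_{\mathrm{D}}},
  \qquad
  C^{\mathrm{genie}} - C\!\left(\alpha^{\star}\right) \approx \kappa' \sqrt{f_{\mathrm{D}}},
\end{equation}
% i.e., both the overhead and the spectral-efficiency penalty relative to a
% receiver with genie-aided CSI vanish as the square root of the fading rate.
```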
Abstract:
The simultaneous use of multiple transmit and receive antennas can unleash very large capacity increases in rich multipath environments. Although such capacities can be approached by layered multi-antenna architectures with per-antenna rate control, the need for short-term feedback arises as a potential impediment, in particular as the number of antennas, and thus the number of rates to be controlled, increases. What we show, however, is that the need for short-term feedback in fact vanishes as the number of antennas and/or the diversity order increases. Specifically, the rate supported by each transmit antenna becomes deterministic and a sole function of the signal-to-noise ratio, the ratio of transmit to receive antennas, and the decoding order, all of which are either fixed or slowly varying. More generally, we illustrate, through this specific derivation, the relevance of some established random CDMA results to the single-user multi-antenna problem.
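The following small Monte Carlo sketch (ours, not the paper's derivation) illustrates the claim that per-antenna rates harden: the chain rule of mutual information splits the aggregate MIMO rate into per-layer terms under successive decoding, and the spread of each term shrinks as the array grows. Antenna counts, SNR and trial counts are arbitrary choices.

```python
import numpy as np

# Per-layer rates under successive decoding, via the chain rule:
# rate_k = C(H[:, :k+1]) - C(H[:, :k]), with C(A) = log2 det(I + (snr/nt) A A^H).
# Layer k is decoded after layers k+1..nt-1 are cancelled, with layers 0..k-1
# treated as noise. Illustrative only; parameters are arbitrary.
rng = np.random.default_rng(0)

def per_layer_rates(nt, nr, snr, trials=2000):
    rates = np.zeros((trials, nt))
    for t in range(trials):
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        prev = 0.0
        for k in range(nt):
            Hk = H[:, :k + 1]
            cap = np.log2(np.real(np.linalg.det(
                np.eye(nr) + (snr / nt) * (Hk @ Hk.conj().T))))
            rates[t, k] = cap - prev
            prev = cap
    return rates

for nt in (2, 4, 8):
    r = per_layer_rates(nt, nt, snr=10.0)
    print(nt, "mean per-layer rates:", np.round(r.mean(axis=0), 2),
          "std:", np.round(r.std(axis=0), 2))
# The standard deviations shrink as nt grows, consistent with the
# deterministic per-antenna rates described above.
```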
Abstract:
This paper formulates power allocation policies that maximize the region of mutual informations achievable in multiuser downlink OFDM channels. Arbitrary partitioning of the available tones among users and arbitrary modulation formats, possibly different for every user, are considered. Two distinct policies are derived, respectively for slow fading channels tracked instantaneously by the transmitter and for fast fading channels known only statistically by it. With instantaneous channel tracking, the solution adopts the form of a multiuser mercury/waterfilling procedure that generalizes the single-user mercury/waterfilling introduced in [1, 2]. With only statistical channel information, in contrast, the mercury/waterfilling interpretation is lost. For both policies, a number of limiting regimes are explored and illustrative examples are provided.
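For reference, the single-user mercury/waterfilling rule that the multiuser policy generalizes can be written as follows (notation ours, following the references cited above); this is a sketch, not the multiuser solution derived in the paper.

```latex
% Single-user mercury/waterfilling (notation ours): parallel subchannels with
% gains $g_k$, inputs with MMSE functions $\mathrm{MMSE}_k(\cdot)$, total power $P$.
\begin{equation}
  p_k =
  \begin{cases}
    \dfrac{1}{g_k}\,\mathrm{MMSE}_k^{-1}\!\left(\dfrac{\eta}{g_k}\right), & g_k > \eta,\\[1ex]
    0, & g_k \le \eta,
  \end{cases}
  \qquad \text{with } \eta \text{ chosen so that } \sum_k p_k = P.
\end{equation}
% With only statistical channel knowledge, as noted above, this
% interpretation no longer applies.
```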
Abstract:
This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection to the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation and imaging conditions. Results from the digital phantom experiments demonstrate that the proposed technique is able to recover sub-voxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation and allowed different pulsation regions to be identified under clinical conditions.
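The sketch below illustrates, in heavily simplified form, the weighted projection-matching idea just described: deformation parameters are estimated by minimizing a weighted mismatch between simulated and acquired projections. Every function, weight and dimension is a toy placeholder, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_projection(volume, theta, angle):
    """Toy stand-in for deforming `volume` by spline parameters `theta`
    and projecting it along `angle`."""
    return volume.sum(axis=0) * (1.0 + theta[0] * np.cos(angle))

def weighted_mismatch(theta, volume, projections, angles, weights):
    # Weights encode how relevant each acquired projection is to the
    # cardiac-cycle time point currently being estimated.
    return sum(w * np.sum((simulate_projection(volume, theta, a) - p) ** 2)
               for p, a, w in zip(projections, angles, weights))

volume = np.ones((8, 8, 8))
angles = np.linspace(0.0, np.pi, 5)
true_theta = np.array([0.05])
projections = [simulate_projection(volume, true_theta, a) for a in angles]
weights = np.exp(-np.linspace(0.0, 1.0, 5))  # toy relevance weighting

result = minimize(weighted_mismatch, x0=np.zeros(1),
                  args=(volume, projections, angles, weights))
print(result.x)  # recovers ~0.05 in this toy setting
```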
Abstract:
This project explores the user costs and benefits of winter road closures. Severe winter weather makes travel unsafe and dramatically increases crash rates. When conditions become unsafe due to winter weather, road closures should allow users to avoid crash costs and eliminate the costs associated with rescuing stranded motorists. The benefits of road closures are therefore the avoided safety costs. The costs of road closures are the delays imposed on motorists and motor carriers who would have made the trip had the road not been closed. This project investigated the costs and benefits of road closures and found that evaluating them is not as simple as it appears. To better understand these costs and benefits, the project reviews the literature, conducts interviews with shippers and motor carriers, and conducts case studies of road closures to determine what actually occurred on roadways during closures. The project also estimates a statistical model that relates weather severity to crash rates. The statistical model is intended to illustrate the possibility of quantitatively relating measurable and predictable weather conditions to the safety performance of a roadway. In the future, weather conditions such as snowfall intensity, visibility, and so on can be used to produce objective measures of the safety performance of a roadway rather than relying on subjective evaluations by field staff. The review of the literature and the interviews clearly illustrate that not all delays (increased travel time) are valued the same. Expected (routine) delays are valued at the generalized costs (the value of the driver's time, fuel, insurance, wear and tear on the vehicle, etc.), but unexpected delays are valued much higher because they interrupt synchronous activities at the trip's destination. To reduce the costs of delays resulting from road closures, public agencies should communicate the likelihood of a road closure as early as possible.
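As an illustration of the kind of statistical model described above, the sketch below fits a Poisson regression of crash counts on weather-severity covariates with traffic exposure as an offset; the data, variable names and coefficients are synthetic, and this is not the project's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic example: crash counts driven by snowfall intensity and visibility,
# with vehicle-miles as exposure. All values are made up for illustration.
rng = np.random.default_rng(1)
n = 500
data = pd.DataFrame({
    "snowfall_intensity": rng.gamma(2.0, 1.0, n),   # e.g., cm per hour
    "visibility": rng.uniform(0.1, 10.0, n),        # e.g., km
    "vehicle_miles": rng.uniform(1e4, 1e5, n),      # exposure
})
rate = np.exp(-8.0 + 0.30 * data["snowfall_intensity"] - 0.10 * data["visibility"])
data["crashes"] = rng.poisson(rate * data["vehicle_miles"])

X = sm.add_constant(data[["snowfall_intensity", "visibility"]])
model = sm.GLM(data["crashes"], X, family=sm.families.Poisson(),
               offset=np.log(data["vehicle_miles"]))
print(model.fit().summary())
```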
Abstract:
For single-user MIMO communication with uncoded and coded QAM signals, we propose bit and power loading schemes that rely only on channel distribution information at the transmitter. To that end, we develop the relationship between the average bit error probability at the output of a ZF linear receiver and the bit rates and powers allocated at the transmitter. This relationship, and the fact that a ZF receiver decouples the MIMO channel into parallel channels, allow us to leverage bit loading algorithms already existing in the literature. We solve the dual problems of bit rate maximization and power minimization and present performance results that illustrate the gains of the proposed scheme with respect to a non-optimized transmission.
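Two standard expressions of the type alluded to above are sketched here in our own notation; the exact relationship developed in the paper may differ.

```latex
% With channel matrix $H$, power $p_k$ on stream $k$, and noise variance
% $\sigma^2$, the post-processing SNR at the output of a ZF linear receiver is
\begin{equation}
  \gamma_k = \frac{p_k}{\sigma^2 \left[(H^{H} H)^{-1}\right]_{kk}},
\end{equation}
% and a common approximation for the bit error probability of square
% $2^{b_k}$-QAM on the decoupled $k$-th stream is
\begin{equation}
  P_{\mathrm{b},k} \approx 0.2 \exp\!\left(-\frac{1.5\,\gamma_k}{2^{b_k} - 1}\right).
\end{equation}
% Averaging over the channel distribution is what permits loading with only
% channel distribution information at the transmitter.
```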
Abstract:
This paper presents the platform developed in the PANACEA project, a distributed factory that automates the stages involved in the acquisition, production, updating and maintenance of Language Resources required by Machine Translation and other Language Technologies. We adopt a set of tools that have been successfully used in the Bioinformatics field, adapt them to the needs of our field, and use them to deploy web services, which can be combined to build more complex processing chains (workflows). This paper describes the platform and its different components (web services, registry, workflows, social network and interoperability). We demonstrate the scalability of the platform by carrying out a set of massive data experiments. Finally, a validation of the platform against a set of required criteria proves its usability for different types of users (non-technical users and providers).
Abstract:
This paper presents a web service architecture for Statistical Machine Translation aimed at non-technical users. A workflow editor allows a user to combine different web services using a graphical user interface. In the current state of the project, the web services have been implemented for a range of sentential and sub-sentential aligners. A common interface and a common data format allow the user to build workflows that exchange different aligners.
Abstract:
The paper presents a competence-based instructional design system and a way to personalize navigation through the course content. The navigation aid tool builds on the competence graph and the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is best prepared to study. We use fuzzy set theory to deal with this uncertainty. The marks of the assessment tests are transformed into linguistic terms and used to assign values to linguistic variables. For each competence, the level of difficulty and the level of knowledge of its prerequisites are calculated from the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence indicating its level of recommendation.
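A toy sketch of the fuzzy reasoning step just described is given below: assessment-derived linguistic variables are combined by IF-THEN rules into a crisp recommendation category. The membership functions, rule base and category labels are illustrative placeholders, not the system's actual definitions.

```python
# Toy fuzzy recommendation of a competence from two assessment-derived inputs.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(mark):
    """Map a 0-10 assessment mark to linguistic terms."""
    return {
        "low":    triangular(mark, -0.1, 0.0, 5.0),
        "medium": triangular(mark, 2.5, 5.0, 7.5),
        "high":   triangular(mark, 5.0, 10.0, 10.1),
    }

def recommend(prerequisite_mark, difficulty_mark):
    prereq = fuzzify(prerequisite_mark)
    difficulty = fuzzify(difficulty_mark)
    # Illustrative rule base: strength of each recommendation category.
    rules = {
        "recommended":     min(prereq["high"], max(difficulty["low"], difficulty["medium"])),
        "possible":        min(prereq["medium"], difficulty["medium"]),
        "not_recommended": max(prereq["low"], difficulty["high"]),
    }
    # Crisp category: the rule that fires most strongly.
    return max(rules, key=rules.get), rules

print(recommend(prerequisite_mark=8.0, difficulty_mark=3.0))
```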
Abstract:
This dissertation proposes a qualitative approach to the subject of advertising, society and the construction of identity, based on the effect of household-appliance commercials on the construction of the female identity of Spanish women today. Conclusions are drawn from a juxtaposition of the social background with the advertising content, and from how Spanish women of today perceive the evolution of the female imagery depicted in advertisements. The aim is to demonstrate how much commercials mirror society and how far they reinforce paradigms that no longer exist.