873 results for Exploit


Relevance: 10.00%

Abstract:

Integrated Communication Strategy for MyLabel. The following paper presents an integrated communication strategy for Continente's private-label cosmetics brand MyLabel. The main purpose of the project is to position MyLabel as a venture brand that can gain a strong market position and compete with manufacturer brands. Based on the brand equity model created for the venture cosmetics brand, MyLabel is approached from a branding perspective in order to improve perceived quality and, consequently, build brand recognition and credibility. In this respect, the integrated communication strategy combines branding tactics with the marketing communication mix. Thereafter, MyLabel is transformed into the sub-brand MyBeauty, which can exploit the opportunities offered by the new market image of retailers' brands and gain a unique position in the market.

Relevance: 10.00%

Abstract:

Existing wireless networks are characterized by a fixed spectrum assignment policy. However, the scarcity of available spectrum and its inefficient usage demand a new communication paradigm that exploits the existing spectrum opportunistically. Future Cognitive Radio (CR) devices should be able to sense unoccupied spectrum, allowing the deployment of real opportunistic networks. Still, traditional Physical (PHY) and Medium Access Control (MAC) protocols are not suitable for this new type of network because they are optimized to operate over fixed assigned frequency bands. Therefore, novel PHY-MAC cross-layer protocols should be developed to cope with the specific features of opportunistic networks. This thesis focuses on the design and evaluation of MAC protocols for Decentralized Cognitive Radio Networks (DCRNs). It starts with a characterization of the spectrum sensing framework based on the Energy-Based Sensing (EBS) technique, considering multiple scenarios. Then, guided by the sensing results obtained with this technique, we present two novel decentralized CR MAC schemes: the first designed to operate in single-channel scenarios and the second in multichannel scenarios. Analytical models for the network goodput, packet service time and individual transmission probability are derived and used to compute the performance of both protocols. Simulation results assess the accuracy of the analytical models as well as the benefits of the proposed CR MAC schemes.
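
The energy-based sensing step summarized above admits a compact illustration. The sketch below is not the thesis' implementation: the decision threshold is assumed to be pre-computed from the noise floor and a target false-alarm probability, and all names and values are hypothetical.

```python
import numpy as np

def channel_is_busy(samples: np.ndarray, threshold: float) -> bool:
    """Energy-based sensing: declare the channel occupied when the average
    sample energy exceeds a pre-computed decision threshold."""
    return float(np.mean(np.abs(samples) ** 2)) > threshold

# Toy usage: a CR node senses the channel before deciding to transmit.
rng = np.random.default_rng(0)
noise = rng.normal(scale=1.0, size=2048)                              # idle channel: noise only
primary = noise + 1.5 * np.sin(2 * np.pi * 0.05 * np.arange(2048))   # channel occupied by a primary user

threshold = 1.4   # assumed value, set from the noise power and a target false-alarm probability
print(channel_is_busy(noise, threshold))     # expected: False -> channel free, transmit
print(channel_is_busy(primary, threshold))   # expected: True  -> channel busy, defer
```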

Relevance: 10.00%

Abstract:

Positioning technologies are becoming ubiquitous and are being used more and more frequently to support a large variety of applications. For outdoor applications, global navigation satellite systems (GNSSs), such as the Global Positioning System (GPS), are the most common and popular choice because of their wide coverage. GPS is also augmented with network-based systems that exploit existing wireless and mobile networks to provide positioning functions where GPS is not available or to save energy in battery-powered devices. Indoors, GNSSs are not a viable solution, but many applications require very accurate, fast and flexible positioning, tracking and navigation functions. These and other requirements have stimulated research activities, in both industry and academia, where a variety of fundamental principles, techniques and sensors are being integrated to provide positioning functions to many applications. The large majority of positioning technologies are for indoor environments, and most of the existing commercial products have been developed for use in office buildings, airports, shopping malls, factory plants and similar spaces. There are, however, other spaces where positioning, tracking and navigation systems play a central role in safety and rescue operations, as well as in supporting specific activities or scientific research in other fields. Among those spaces are underground tunnels, mines, and even underwater wells and caves. This chapter describes the research efforts of the past few years on the development of positioning systems for underground tunnels, with particular emphasis on the case of the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research), where localization aims at enabling more automatic and unmanned radiation surveys. Examples of positioning and localization systems developed in the past few years for underground facilities are presented in the following section, together with a brief characterization of those spaces' special conditions and the requirements of some of the most common applications. Section 5.2 provides a short overview of some of the most representative research efforts currently being carried out by research teams around the world. In addition, some of the fundamental principles and techniques are identified, such as the use of leaky coaxial cables, as used at the LHC. In Section 5.3, we introduce the specific environment of the LHC and define the positioning requirements for the envisaged application. This is followed by a detailed description of our approach and the results achieved so far. Some closing comments and remarks are presented in a final section.

Relevance: 10.00%

Abstract:

Research and development around indoor positioning and navigation is capturing the attention of an increasing number of research groups and labs around the world. Among the several techniques being proposed for indoor positioning, solutions based on Wi-Fi fingerprinting are the most popular, since they exploit existing WLAN infrastructures to support software-only positioning, tracking and navigation applications. Despite the enormous research effort in this domain, and despite the existence of some commercial products based on Wi-Fi fingerprinting, it is still difficult to compare the real-world performance of the existing solutions. The EvAAL competition, hosted by the IPIN 2015 conference, contributed to filling this gap. This paper describes the experience of the RTLS@UM team in participating in track 3 of that competition.
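
Wi-Fi fingerprinting systems of the kind evaluated in the competition are typically built on nearest-neighbour matching in signal space. The sketch below is a generic k-nearest-neighbour estimator over a toy radio map, not the RTLS@UM system itself; all names and values are illustrative.

```python
import numpy as np

def knn_position(rss_sample: np.ndarray,
                 radio_map_rss: np.ndarray,
                 radio_map_xy: np.ndarray,
                 k: int = 3) -> np.ndarray:
    """Estimate a position as the mean location of the k radio-map fingerprints
    whose RSS vectors are closest (Euclidean distance) to the observed sample."""
    dists = np.linalg.norm(radio_map_rss - rss_sample, axis=1)
    nearest = np.argsort(dists)[:k]
    return radio_map_xy[nearest].mean(axis=0)

# Toy radio map: 4 reference points, RSS (dBm) from 3 access points, positions in metres.
radio_map_rss = np.array([[-40., -70., -80.],
                          [-55., -60., -75.],
                          [-70., -50., -65.],
                          [-80., -45., -55.]])
radio_map_xy = np.array([[0., 0.], [5., 0.], [5., 5.], [0., 5.]])

print(knn_position(np.array([-58., -58., -72.]), radio_map_rss, radio_map_xy))
```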

Relevance: 10.00%

Abstract:

In the past decade, the research community has dedicated considerable effort to indoor positioning systems based on Wi-Fi fingerprinting techniques, mainly because of their ability to exploit existing infrastructures. Crowdsourcing approaches, also known as organic, have recently been proposed to address the problem of creating and maintaining the corresponding radio maps. In these organic systems, the users build the radio map themselves while using it to estimate their own position. However, most of these collaborative methods assume that all users are honest and committed to contributing to a good-quality radio map. In this paper we assess the quality of a radio map built collaboratively and propose a method to classify the credibility of individual contributions and the reputation of individual users. Experimental results are presented for an organic indoor location system that has been used by more than one hundred users over a period of around 12 months.
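
The credibility and reputation metrics proposed in the paper are not reproduced here. Purely as an illustration of the general idea, the sketch below keeps a per-user reputation as an exponential moving average of the credibility scores assigned to that user's contributions; the update rule and all names are assumptions, not the paper's method.

```python
def update_reputation(reputation: dict, user: str, credibility: float, alpha: float = 0.1) -> float:
    """Illustrative reputation update: blend the previous reputation with the
    credibility score of the user's latest contribution (EMA with weight alpha)."""
    previous = reputation.get(user, 0.5)   # unknown users start as neutral
    reputation[user] = (1 - alpha) * previous + alpha * credibility
    return reputation[user]

reputation = {}
for user, credibility in [("user_a", 0.9), ("user_b", 0.2), ("user_a", 0.8)]:
    update_reputation(reputation, user, credibility)

print(reputation)   # higher values -> contributions weighted more heavily in the radio map
```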

Relevance: 10.00%

Abstract:

Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface is more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the “old” mouse input versus the “new” input modalities.
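
The statistical comparison mentioned above (a significant difference between mean selection times across user groups) is of the kind sketched below with a one-way ANOVA over made-up timing data; this illustrates the test family only, not the study's actual data or analysis.

```python
from scipy import stats

# Hypothetical selection times in seconds for one input modality (not the study's data).
children     = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2]
adults       = [1.2, 1.1, 1.4, 1.3, 1.2, 1.3]
older_adults = [2.9, 3.1, 2.7, 3.4, 3.0, 3.2]

f_stat, p_value = stats.f_oneway(children, adults, older_adults)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> group means differ significantly
```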

Relevance: 10.00%

Abstract:

Doctoral Thesis (PhD Programme in Molecular and Environmental Biology)

Relevance: 10.00%

Abstract:

Cancer cells rely mostly on glycolysis to meet their energetic demands, producing large amounts of lactate that are extruded to the tumour microenvironment by monocarboxylate transporters (MCTs). Evidence on the role of MCTs in the survival of colorectal cancer (CRC) cells is scarce, and that role remains poorly understood. In this study, we aimed to better understand this issue and to exploit these transporters as novel therapeutic targets, alone or in combination with the classical CRC chemotherapeutic drug 5-fluorouracil (5-FU). For that purpose, we characterized the effects of MCT activity inhibition in normal and CRC-derived cell lines and assessed the effect of MCT inhibition in combination with 5-FU. Here, we demonstrated that MCT inhibition using CHC (α-cyano-4-hydroxycinnamic acid), DIDS (4,4'-diisothiocyanatostilbene-2,2'-disulphonic acid) and quercetin decreased cell viability, disrupted the glycolytic phenotype, inhibited proliferation and enhanced cell death in CRC cells. These results were confirmed by specific inhibition of MCT1/4 by RNA interference. Notably, we showed that 5-FU cytotoxicity was potentiated by lactate transport inhibition in CRC cells, either through activity inhibition or expression silencing. These findings provide novel evidence for the pivotal role of MCTs in CRC maintenance and survival, as well as for the use of these transporters as potential new therapeutic targets in combination with conventional CRC therapy.

Relevance: 10.00%

Abstract:

Doctoral Thesis in Molecular and Environmental Biology (specialization in Cell Biology and Health).

Relevance: 10.00%

Abstract:

Master's internship report in Archaeology.

Relevance: 10.00%

Abstract:

This work focused on how different types of oil phase, MCT (medium-chain triglycerides) and LCT (long-chain triglycerides), influence the gelation process of beeswax and thus the properties of the resulting organogels. Organogels were produced at different temperatures, and qualitative phase diagrams were constructed to identify and classify the type of structure formed at various compositions. The microstructure of the gelator crystals was studied by polarized light microscopy. Melting and crystallization were characterized by differential scanning calorimetry and rheology (flow and small-amplitude oscillatory measurements) to understand the organogels' behaviour under different mechanical and thermal conditions. FTIR analysis was employed to further understand oil-gelator chemical interactions. Results showed that increasing the beeswax concentration led to higher values of the storage and loss moduli (G′, G″) and the complex modulus (G*) of the organogels, which is associated with the strong network formed between the crystalline gelator structure and the oil phase. Crystallization occurred in two steps (well evidenced at higher gelator concentrations) as the temperature decreased. Thermal analysis showed hysteresis between melting and crystallization. Small-angle X-ray scattering (SAXS) analysis allowed a better understanding of how crystal conformations were arranged in each type of organogel. Structuring with medium- or long-chain triglyceride oils thus proved important for understanding the impact of different carbon chain lengths on the gelation process and on the gels' properties.

Relevance: 10.00%

Abstract:

Today's advances in computing power are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized to exploit the specific processing power of the underlying architecture, but converting an algorithm into its parallel form is complex and, moreover, specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, resulting in what is known as a Symmetric Multi-Processor (SMP); nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated cores as technology matures. Graphics Processing Units (GPUs), originally designed to handle only video processing, have in turn developed their computing power by integrating multiple processing units, to the point that current GPUs can run on the order of 200 to 400 parallel processing threads. Because this kind of processing has much in common with scientific computing, these devices have been reoriented as General-Purpose Graphics Processing Units (GPGPUs). Unlike the SMP processors mentioned above, however, GPGPUs are not general-purpose: they offer limited on-board memory, and their use is productive only for suitable kinds of parallel workloads, so the implementation of algorithms needs to be addressed carefully. Finally, Field-Programmable Gate Arrays (FPGAs) are devices capable of performing large numbers of operations in parallel and can be used to implement specific algorithms that need to run at very high speed; their drawback is the complexity of programming and testing the algorithm instantiated in the device. In this context, where several alternatives for speeding up algorithms are available, our work aims at analysing the specific characteristics of each of these architectures and their impact on the structure of parallel algorithms, so that the processing performance obtained is commensurate with the resources used and the architectures can be combined in a complementary, beneficial way. Specifically, starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated; these properties in turn determine which of these types of hardware is most appropriate for its implementation. In particular, the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware are taken into account.
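
As a minimal illustration of the data-parallel pattern that SMP cores exploit (and that GPUs and FPGAs implement with their own programming models), the sketch below splits an embarrassingly parallel workload, with no data dependences, across worker processes; it is a generic example, not taken from the work described above.

```python
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    """Independent CPU-bound kernel: no data dependences between chunks."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 4
    chunks = [data[i::workers] for i in range(workers)]      # static data decomposition
    with Pool(processes=workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)  # one chunk per core
    print(sum(partials))                                      # combine the partial results
```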

Relevance: 10.00%

Abstract:

In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and appropriate techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUTs) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependences in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of FFT/IFFT algorithms is tightly connected to the capability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also deliver high performance when run on a specialized FFT/IFFT hardware architecture that can exploit the parallelism of the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in a polar coordinate system, using sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
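
One of the improvements mentioned, sine and cosine look-up tables, can be illustrated on the basic DFT. The sketch below is a plain software analogue of the idea (precompute the twiddle factors once, then reuse them by index), not the hardware architecture discussed in the paper.

```python
import math

def dft_with_lut(x):
    """Direct DFT of a real sequence using precomputed cosine/sine look-up tables,
    illustrating the LUT optimization mentioned above."""
    n = len(x)
    cos_lut = [math.cos(2 * math.pi * k / n) for k in range(n)]
    sin_lut = [math.sin(2 * math.pi * k / n) for k in range(n)]
    out = []
    for k in range(n):
        # Twiddle index (k*t) mod n reuses the tables thanks to periodicity.
        re = sum(x[t] * cos_lut[(k * t) % n] for t in range(n))
        im = -sum(x[t] * sin_lut[(k * t) % n] for t in range(n))
        out.append(complex(re, im))
    return out

print(dft_with_lut([1.0, 0.0, -1.0, 0.0]))   # non-zero only at bins 1 and 3
```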


Relevance: 10.00%

Abstract:

This paper assesses the importance of initial technological endowments when firms decide to establish a technological agreement. We propose a Bertrand duopoly model in which firms evaluate the advantages they can obtain from the agreement according to its length. Allowing them to exploit a learning process, we show a strict connection between the starting point and the final result. Moreover, when learning is treated as an iterative process, the set of initial conditions that lead to successful ventures switches from a continuum of values to a Cantor set.
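
The Cantor-set outcome mentioned above can be visualized with the classic middle-thirds construction, in which repeatedly discarding an interior fraction of every surviving interval leaves a Cantor set in the limit. The sketch below shows only this standard construction as an analogy; it is not the paper's duopoly dynamics.

```python
def cantor_intervals(level: int):
    """Intervals of [0, 1] surviving `level` rounds of middle-thirds removal;
    the standard Cantor-set construction, illustrative only."""
    intervals = [(0.0, 1.0)]
    for _ in range(level):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3.0
            nxt += [(a, a + third), (b - third, b)]   # keep the two outer thirds
        intervals = nxt
    return intervals

print(cantor_intervals(3))   # 8 intervals with total length (2/3)**3
```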