919 results for Drop on Demand


Relevance: 80.00%

Publisher:

Abstract:

We report the discovery of a new transiting planet in the southern hemisphere. It was found by the WASP-South transit survey and confirmed photometrically and spectroscopically with the 1.2 m Swiss Euler telescope, the LCOGT 2 m Faulkes South Telescope, the 60 cm TRAPPIST telescope, and the ESO 3.6 m telescope. The orbital period of the planet is 2.94 days. We find that it is a gas giant with a mass of 0.88 ± 0.10 MJ and an estimated radius of 0.96 ± 0.05 RJ. We obtained spectra during transit with the HARPS spectrograph and detect the Rossiter-McLaughlin effect despite its small amplitude. Because of the low signal-to-noise ratio of the effect and a small impact parameter, we cannot place a strong constraint on the projected spin-orbit angle. We find two conflicting values for the stellar rotation: spectral line broadening gives v sin I = 2.2 ± 0.3 km s-1, while another method, based on the activity level using the index log R'_HK, gives an equatorial rotation velocity of only v = 1.35 ± 0.20 km s-1. Depending on which value is used as a prior in our analysis, the planet might be either misaligned or aligned; this result raises doubts about the use of such priors. There is evidence of neither eccentricity nor any radial-velocity drift with time. Based on WASP-South photometric observations confirmed with the LCOGT Faulkes South Telescope, the 60 cm TRAPPIST telescope, the CORALIE spectrograph and the camera of the Swiss 1.2 m Euler Telescope at La Silla, Chile, as well as with the HARPS spectrograph, mounted on the ESO 3.6 m telescope, also at La Silla, under proposal 084.C-0185. The data are publicly available at the CDS, Strasbourg, and on request from the main author. RV data are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/531/A24. The Appendix is available in electronic form at http://www.aanda.org
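The conflict between the two rotation estimates quoted above can be made concrete with a quick check: if both values were correct, the implied stellar inclination would satisfy sin i = (v sin i) / v_eq, which cannot exceed 1. A small sketch using only the numbers from the abstract:

```python
# Values quoted in the abstract (km/s).
vsini = 2.2    # from spectral line broadening
v_eq = 1.35    # equatorial velocity inferred from the activity index log R'_HK

# If both estimates were correct, sin(i) = (v sin i) / v_eq would hold,
# and sin(i) can never exceed 1 for a physical inclination.
sin_i = vsini / v_eq
print(f"implied sin(i) = {sin_i:.2f}")
```

The result is greater than 1, which is unphysical: within the quoted values at least one of the two estimates must be off, which is exactly why the abstract calls them conflicting.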

Relevance: 80.00%

Publisher:

Abstract:

Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change, and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high-performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that match the application's requirements.
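The two-phase approach can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the resource offers, field names, and the two heuristics are all hypothetical stand-ins for the constraint filter and the cost- or performance-based selection described above.

```python
# Hypothetical resource offers gathered across providers.
resources = [
    {"name": "A", "cores": 8,  "os": "linux",   "cost_per_hour": 0.40, "gflops": 90},
    {"name": "B", "cores": 16, "os": "linux",   "cost_per_hour": 0.90, "gflops": 210},
    {"name": "C", "cores": 4,  "os": "windows", "cost_per_hour": 0.20, "gflops": 40},
]

requirements = {"min_cores": 8, "os": "linux"}

# Phase 1: constraint filtering -- keep only resources able to run the application.
feasible = [r for r in resources
            if r["cores"] >= requirements["min_cores"]
            and r["os"] == requirements["os"]]

# Phase 2: heuristic selection -- a cost-based or a performance-based pick.
cheapest = min(feasible, key=lambda r: r["cost_per_hour"])
fastest = max(feasible, key=lambda r: r["gflops"])

print(cheapest["name"], fastest["name"])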

Relevance: 80.00%

Publisher:

Abstract:

Laughter is a frequently occurring social signal and an important part of human non-verbal communication. However, it is often overlooked as a serious topic of scientific study. While the lack of research in this area is mostly due to laughter's non-serious nature, laughter is also a particularly difficult social signal to produce on demand in a convincing manner, making it a difficult topic for study in laboratory settings. In this paper we provide some techniques and guidance for inducing both hilarious laughter and conversational laughter. These techniques were devised with the goal of capturing motion information related to laughter while the person laughing was either standing or seated. Comments on the value of each of the techniques and general guidance on the importance of atmosphere, environment and social setting are provided.

Relevance: 80.00%

Publisher:

Abstract:

By 2015, with the proliferation of wireless multimedia applications and services (e.g., mobile TV, video on demand, online video repositories, immersive video interaction, peer-to-peer video streaming, and interactive video gaming) and of anytime, anywhere communication, the number of smartphones and tablets, the most common web-access devices, will exceed 6.5 billion. Data volumes in wireless multimedia data-intensive applications and mobile web services are projected to increase by a factor of 10 every five years, accompanied by a 20 percent increase in energy consumption, 80 percent of which is related to multimedia traffic. In turn, multimedia energy consumption is rising at 16 percent per year, doubling every six years. It is estimated that energy costs alone account for as much as half of annual operating expenditure. This has prompted concerted efforts by major operators to drastically reduce carbon emissions, by up to 50 percent over the next 10 years. Clearly, there is an urgent need for new disruptive paradigms of green media to bridge the gap between wireless technologies and multimedia applications.

Relevance: 80.00%

Publisher:

Abstract:

A huge variety of proteins are able to form fibrillar structures(1), especially at high protein concentrations. Hence, it is surprising that spider silk proteins can be stored in a soluble form at high concentrations and transformed into extremely stable fibres on demand(2,3). Silk proteins are reminiscent of amphiphilic block copolymers containing stretches of polyalanine and glycine-rich polar elements forming a repetitive core flanked by highly conserved non-repetitive amino-terminal(4,5) and carboxy-terminal(6) domains. The N-terminal domain comprises a secretion signal, but further functions remain unassigned. The C-terminal domain was implicated in the control of solubility and fibre formation(7) initiated by changes in ionic composition(8,9) and mechanical stimuli known to align the repetitive sequence elements and promote beta-sheet formation(10-14). However, despite recent structural data(15), little is known about this remarkable behaviour in molecular detail. Here we present the solution structure of the C-terminal domain of a spider dragline silk protein and provide evidence that the structural state of this domain is essential for controlled switching between the storage and assembly forms of silk proteins. In addition, the C-terminal domain also has a role in the alignment of secondary structural features formed by the repetitive elements in the backbone of spider silk proteins, which is known to be important for the mechanical properties of the fibre.

Relevance: 80.00%

Publisher:

Abstract:

The 5G network infrastructure is driven by the evolution of today's most demanding applications. Already, multimedia applications such as on-demand HD video and IPTV require gigabit-per-second throughput and low delay, while future technologies include ultra HDTV and machine-to-machine communication. Mm-wave technologies such as IEEE 802.15.3c and IEEE 802.11ad are ideal candidates to deliver high throughput to multiple users demanding differentiated QoS. Optimization is often used as a methodology to meet throughput and delay constraints. However, traditional optimization techniques are not suited to a mixed set of multimedia applications. Particle swarm optimization (PSO) is shown to be a promising technique in this context. Channel-time allocation PSO (CTA-PSO) is successfully shown here to allocate resources even in scenarios where blockage of the 60 GHz signal poses significant challenges.
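To make the PSO idea concrete, here is a minimal, generic particle swarm optimiser. The paper's CTA-PSO applies the same mechanics (personal and global bests, inertia and attraction terms) to channel-time allocation; this toy version just minimises a simple continuous objective, and all parameter values are illustrative defaults rather than the paper's settings.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimising `objective` over R^dim (illustrative sketch)."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull towards personal best + pull towards global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Toy objective: sphere function, whose minimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)
```

In CTA-PSO the "position" would encode a channel-time allocation and the objective would score QoS satisfaction rather than a sphere function.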

Relevance: 80.00%

Publisher:

Abstract:

Introduction: Vocational training (VT) is a mandatory requirement for all UK dental graduates prior to entering NHS practice. The VT period provides structured, supervised experience supported by study days and interaction with peers. It is not compulsory for Irish dental graduates working in either Ireland or the UK to undertake VT, yet a proportion voluntarily do so each year.

Objectives: This study was designed to explore the choices made by Irish dental graduates. It aimed to record any benefits of VT and its impact upon future career choices.

Method: A self-completion questionnaire was developed and piloted before being circulated electronically to recent dental graduates from University College Cork. After collecting demographic information, respondents were asked to indicate whether they pursued vocational training on graduation, give their perception of their post-graduation experience, describe their current work profile and detail any formal postgraduate studies.

Results: 35% of respondents opted to undertake VT, and 79% of these did so in the UK. Those who completed VT regarded it as a very positive experience, with benefits including working in a positive learning environment, help on demand and interaction with peers. Of those who chose VT, 49% have pursued some form of further formal postgraduate study, compared with 40% of those who did not. All of the respondents who completed VT indicated they would recommend it to current Irish graduates. The majority of those who took up an associate position immediately after graduation reported that this was beneficial, but up to three quarters would recommend that current graduates undertake VT and 45% would now choose to do so themselves.

Conclusions: Increasing numbers of Irish graduates are moving to the UK to undertake VT and they find it a beneficial experience. In addition, those who undertook VT were more likely to undertake formal postgraduate study.

Relevance: 80.00%

Publisher:

Abstract:

How can GPU acceleration be obtained as a service in a cluster? This question has become increasingly significant due to the inefficiency of installing GPUs on all nodes of a cluster. The research reported in this paper addresses the above question by employing rCUDA (remote CUDA), a framework that facilitates Acceleration-as-a-Service (AaaS), such that the nodes of a cluster can request the acceleration of a set of remote GPUs on demand. The rCUDA framework exploits virtualisation and ensures that multiple nodes can share the same GPU. In this paper we test the feasibility of the rCUDA framework on a real-world application employed in the financial risk industry that can benefit from AaaS in a production setting. The results confirm the feasibility of rCUDA and highlight that rCUDA achieves performance similar to CUDA, provides consistent results and, more importantly, allows a single application to benefit from all the GPUs available in the cluster without losing efficiency.
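The AaaS idea of leasing and sharing remote GPUs on demand can be sketched as a toy broker. This is purely illustrative: rCUDA itself works transparently by intercepting the CUDA API, and none of the class or method names below come from the rCUDA framework.

```python
class GpuBroker:
    """Hypothetical broker leasing (possibly shared) remote GPUs to cluster nodes."""

    def __init__(self, gpus):
        self.gpus = gpus            # remote GPU identifiers
        self.leases = {}            # gpu -> set of client nodes sharing it

    def request(self, node, n_gpus):
        """Lease n_gpus remote GPUs to a node; a GPU may serve several nodes."""
        # Prefer the least-loaded GPUs so sharing stays balanced.
        ranked = sorted(self.gpus, key=lambda g: len(self.leases.get(g, set())))
        granted = ranked[:n_gpus]
        for g in granted:
            self.leases.setdefault(g, set()).add(node)
        return granted

    def release(self, node):
        """Drop all of a node's leases when its job finishes."""
        for clients in self.leases.values():
            clients.discard(node)

broker = GpuBroker(["gpu0", "gpu1"])
a = broker.request("node-a", 2)   # node-a acquires both remote GPUs
b = broker.request("node-b", 1)   # node-b shares one of them
```

The point of the sketch is the decoupling the paper exploits: a node's GPU count is a scheduling decision, not a hardware installation.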

Relevance: 80.00%

Publisher:

Abstract:

In-Memory Databases (IMDBs), such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory. This improved performance means that reports and analytic queries can now be processed on demand. The goal, therefore, is to provide near real-time responses to compute- and data-intensive analytic queries. To facilitate this, much work has investigated the use of acceleration technologies within the database context. While current research into the application of these technologies has yielded positive results, it has tended to focus on single database tasks or on isolated single-user requests. This paper uses SHEPARD, a framework for managing accelerated tasks across shared heterogeneous resources, to introduce acceleration into an IMDB. Results show how, using SHEPARD, multiple simultaneous user queries all receive speed-up by using a shared pool of accelerators. Results also show that offloading analytic tasks onto accelerators can have indirect benefits for other database workloads by reducing contention for CPU resources.
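The placement idea, analytic tasks drawn from concurrent queries being dispatched onto a shared accelerator pool, with CPU fallback, can be sketched as follows. This is a hypothetical simplification, not SHEPARD's actual API or policy.

```python
import queue

def run_tasks(tasks, n_accelerators):
    """Dispatch tasks onto a shared accelerator pool, falling back to the CPU."""
    free = queue.Queue()
    for i in range(n_accelerators):
        free.put(f"accel{i}")

    placements = []
    for task in tasks:
        try:
            device = free.get_nowait()   # take an idle accelerator if one exists
        except queue.Empty:
            device = "cpu"               # otherwise contend for CPU resources
        placements.append((task, device))
        # A real system would execute the task here and return the
        # accelerator to the pool on completion.
    return placements

placements = run_tasks(["q1", "q2", "q3"], n_accelerators=2)
print(placements)
```

The indirect benefit the abstract mentions falls out of this picture: every task placed on `accel*` is a task no longer competing for the CPU cores the IMDB itself needs.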

Relevance: 80.00%

Publisher:

Abstract:

In modern society, new devices, applications and technologies with sophisticated capabilities are converging on the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point in trying to provide enough flexibility for users and profits for operators, while dealing with the complex requirements raised by this recent evolution. Fully aligned with future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that allows several networks with different characteristics to be created, according to their context, on top of a single Wireless Mesh Network (WMN) whose infrastructure and protocols are very flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices' capabilities and their services' quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, this Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the future Internet while allowing dynamic resource management. In this way, network virtualization allows a flexible and programmable configuration of a WMN, so that it can be shared by multiple logical networks (or virtual networks - VNs).
Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target mechanisms to: (i) discover and select the VN to assign to a user; (ii) create, adapt and remove the VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve the performance and reduce the complexity of VN control and management. Finally, due to the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities across the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
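The VN discovery-and-selection step (i) can be sketched as matching a user's context profile against the profiles advertised by the logical networks. Field names, profiles and the matching rule below are invented for illustration and are not the thesis's actual context model.

```python
# Hypothetical logical networks, each advertising the context profile it supports.
virtual_networks = [
    {"id": "vn-secure", "security": "high", "mobility": "low",  "cost": "high"},
    {"id": "vn-mobile", "security": "low",  "mobility": "high", "cost": "low"},
]

def select_vn(user_context, vns):
    """Return the first VN whose profile satisfies every stated user need."""
    for vn in vns:
        if all(vn.get(k) == v for k, v in user_context.items()):
            return vn["id"]
    return None  # no match: the architecture would create/adapt a VN instead

chosen = select_vn({"mobility": "high", "security": "low"}, virtual_networks)
print(chosen)
```

The `None` branch corresponds to step (ii): when no existing VN fits the context, the control plane must create or adapt one rather than merely select.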

Relevance: 80.00%

Publisher:

Abstract:

The rapid evolution and proliferation of a world-wide computerized network, the Internet, has resulted in an overwhelming and constantly growing amount of publicly available data and information, a trend also evident in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the text-mining task that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize the extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since the previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributes to a more accurate updating of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
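The core task of concept recognition plus normalization to knowledge-base identifiers can be illustrated with a toy dictionary matcher. This is a drastic simplification of what tools such as Neji do (which add optimised matching and machine-learned models); the lexicon entries are merely illustrative.

```python
import re

# Illustrative lexicon mapping lower-cased mentions to knowledge-base identifiers.
lexicon = {
    "brca1": "HGNC:1100",
    "p53": "HGNC:11998",
}

def recognize(text):
    """Return (mention, offset, kb_id) triples for lexicon hits in the text."""
    hits = []
    for m in re.finditer(r"\w+", text):
        token = m.group().lower()
        if token in lexicon:
            # Keep the surface form, its character offset, and the normalized ID.
            hits.append((m.group(), m.start(), lexicon[token]))
    return hits

hits = recognize("Mutations in BRCA1 interact with p53.")
print(hits)
```

Real systems replace the dictionary lookup with trained sequence models and disambiguation, but the output shape, mention, offset, normalized identifier, is the same interface a biocuration tool consumes.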

Relevance: 80.00%

Publisher:

Abstract:

Network virtualisation is seen as a promising approach to overcome the so-called "Internet impasse" and bring innovation back into the Internet, by allowing easier migration towards novel networking approaches as well as the coexistence of complementary network architectures on a shared infrastructure in a commercial context. Recently, interest from operators and mainstream industry in network virtualisation has grown quite significantly, as the potential benefits of virtualisation have become clearer, both from an economical and an operational point of view. In the beginning, the concept was mainly a research topic, materialized in small-scale testbeds and research network environments. This PhD Thesis aims to provide the network operator with a set of mechanisms and algorithms capable of managing and controlling virtual networks. To this end, we propose a framework that aims to allocate, monitor and control virtual resources in a centralized and efficient manner. In order to analyse the performance of the framework, we implemented and evaluated it on a small-scale testbed. To enable the operator to make an efficient allocation of virtual networks onto the substrate network, in real time and on demand, a heuristic algorithm is proposed to perform the virtual network mapping. For the network operator to obtain the highest profit from the physical network, a mathematical formulation is also proposed that aims to maximize the number of virtual networks allocated onto the physical network. Since the power consumption of the physical network is a very significant part of the operating costs, it is important to allocate virtual networks onto fewer physical resources, preferably resources that are already active. To address this challenge, we propose a mathematical formulation that aims to minimize the energy consumption of the physical network without affecting the efficiency of the allocation of virtual networks.
To minimize fragmentation of the physical network while increasing the revenue of the operator, the initial formulation is extended to contemplate the re-optimization of previously mapped virtual networks, so that the operator makes better use of its physical infrastructure. It is also necessary to address the migration of virtual networks, whether for reasons of load balancing or of imminent failure of physical resources, without affecting the proper functioning of the virtual network. To this end, we propose a method based on cloning techniques to perform the migration of virtual networks across the physical infrastructure, transparently and without affecting the virtual network. In order to assess the resilience of virtual networks to physical network failures, while obtaining the optimal solution for the migration of virtual networks in case of imminent failure of physical resources, the mathematical formulation is extended to minimize the number of migrated nodes and the relocation of virtual links. In comparison with our optimization proposals, we found that existing heuristics for mapping virtual networks perform poorly. We also found that it is possible to minimize energy consumption without penalizing efficient allocation. By applying the re-optimization to the virtual networks, it was shown that it is possible to obtain more free resources as well as better-balanced physical resources. Finally, it was shown that virtual networks are quite resilient to failures in the physical network.
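A virtual network mapping heuristic of the kind discussed above can be sketched as a greedy node-embedding pass. This is a generic illustration, not the thesis's algorithm: link mapping is omitted, and the capacities, demands, and placement rule (most demanding virtual node first, onto the substrate node with the most free capacity) are illustrative choices.

```python
def map_virtual_network(substrate, vn_demands):
    """Greedy node embedding sketch.

    substrate: {substrate_node: free_cpu}; vn_demands: {virtual_node: cpu}.
    Returns a {virtual_node: substrate_node} mapping, or None if rejected.
    """
    free = dict(substrate)
    mapping = {}
    # Place the most demanding virtual nodes first.
    for vnode, cpu in sorted(vn_demands.items(), key=lambda kv: -kv[1]):
        candidates = [n for n, c in free.items() if c >= cpu]
        if not candidates:
            return None  # embedding rejected: insufficient substrate capacity
        best = max(candidates, key=lambda n: free[n])
        mapping[vnode] = best
        free[best] -= cpu
    return mapping

print(map_virtual_network({"s1": 10, "s2": 6}, {"a": 7, "b": 5}))
```

The energy-aware and re-optimization formulations in the thesis change exactly this placement rule, e.g. preferring already-active substrate nodes, or re-running the mapping over previously embedded VNs.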

Relevance: 80.00%

Publisher:

Abstract:

The law regulating the availability of abortion is problematic both legally and morally. It is dogmatic in its requirements of women and doctors and ignorant of would-be fathers. In practice its application is liberal - with s1(1)(a) of the Abortion Act 1967 treated as a 'catch-all' ground - and it allows abortion on demand. Yet this is not reflected in the 'law'. Against this outdated legislation I propose a model of autonomy which seeks to tether our moral concerns to a new legal approach to abortion. I do so by maintaining that a legal conception of autonomy is derivable from the categorical imperative resulting from Gewirth's argument to the Principle of Generic Consistency: act in accordance with the generic rights of your recipients as well as of yourself. This model of Gewirthian Rational Autonomy, I suggest, provides a guide for both public and private notions of autonomy and for how our autonomous interests can be balanced across social structures in order to legitimately empower choice. I claim, ultimately, that the relevant rights in the context of abortion are derivable from this model.

Relevance: 80.00%

Publisher:

Abstract:

Project submitted to obtain the Master's degree in Informatics and Computer Engineering

Relevance: 80.00%

Publisher:

Abstract:

This paper describes the hardware implementation in an FPGA of a high-rate MIMO receiver based on the Alamouti scheme, for three modulations: BPSK, QPSK and 16-QAM. The implementation with 16-QAM achieves more than 1.6 Gbps using 66% of the resources of a medium-sized Virtex-4 FPGA. These results indicate that the Alamouti scheme is a good design option for the hardware implementation of a high-rate MIMO receiver. Also, using an FPGA, the modulation can be dynamically changed on demand.
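The Alamouti scheme the receiver implements can be sketched in floating point: two symbols are sent over two antennas in two time slots, and the receiver combines the two received samples using the channel estimates. The paper realises this in FPGA hardware; the channel values and symbols below are made up for the example, and noise is omitted.

```python
def alamouti_encode(s1, s2):
    """Encode two symbols over two antennas and two time slots."""
    # Slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*).
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Recover both symbols from the two received samples, given channels h1, h2."""
    norm = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / norm
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / norm
    return s1_hat, s2_hat

# Noiseless check: two QPSK symbols over an (illustrative) flat channel.
h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j
(t1a, t1b), (t2a, t2b) = alamouti_encode(s1, s2)
r1 = h1 * t1a + h2 * t1b          # received sample, slot 1
r2 = h1 * t2a + h2 * t2b          # received sample, slot 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
print(s1_hat, s2_hat)
```

Because the combining step is the same regardless of the constellation, swapping BPSK, QPSK or 16-QAM symbol mappers around this core is straightforward, which is exactly why the FPGA design can switch modulation on demand.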