970 results for Internet hosting services


Relevance: 80.00%

Abstract:

1974 was the year when the Swedish pop group ABBA won the Eurovision Song Contest in Brighton and when Blue Swede reached number one on the Billboard Hot 100 in the US. Although Swedish pop music gained some international success even prior to 1974, this year is often considered the beginning of an era in which Swedish pop music enjoyed great success around the world. Brands such as ABBA, Europe, Roxette, The Cardigans, Ace of Base, In Flames, Robyn, Avicii and Swedish House Mafia, together with music producers Stig Andersson, Ola Håkansson, Dag Volle, Max Martin, Andreas Carlsson, Jorgen Elofsson and several others, have kept the myth of the Swedish music miracle alive for more than four decades. Swedish music looks set to continue reaping success around the world, but since the millennium, Sweden's relationship with music has centred less on successful musicians and composers and more on relatively controversial Internet-based services for music distribution developed by Swedish entrepreneurs and engineers. This chapter focuses on the music industry in Sweden. It discusses the development of the Internet services mentioned above and their impact on the production, distribution and consumption of recorded music. Ample space is given in particular to Spotify, the music service that has quickly and fundamentally changed the music industry in Sweden. The chapter also presents how the music industry's three sectors - recorded music, music licensing and live music - interact and evolve in Sweden.

Relevance: 80.00%

Abstract:

Service mismatches involve the adaptation of structural and behavioural interfaces of services, which in practice incurs long lead times through manual coding effort. We propose a framework, complementary to conventional service adaptation, to extract comprehensive and semantically normalised service interfaces, useful for interoperability in large business networks and the Internet of Services. The framework supports introspection and analysis of large and overloaded operational signatures to derive focal artefacts, namely the underlying business objects of services. A more simplified and comprehensive service interface layer is created based on these, and rendered into semantically normalised interfaces, given an ontology accrued through the framework from service analysis history. This opens up the prospect of supporting capability comparisons across services, and run-time request backtracking and adjustment, as consumers discover new features of a service's operations through corresponding features of similar services. This paper provides a first exposition of the service interface synthesis framework, describing patterns having novel requirements for unilateral service adaptation, and algorithms for interface introspection and business object alignment. A prototype implementation and analysis of web services drawn from commercial logistics systems are used to validate the algorithms and identify open challenges and future research directions.
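As a toy illustration of the business-object-alignment idea (the operation names, message elements and grouping heuristic below are invented for this sketch; they are not the paper's actual algorithm), candidate business objects can be derived by grouping overloaded operation signatures on shared message element names:

```python
# Hypothetical operation signatures: operation name -> message element names.
signatures = {
    "createShipmentOrder": {"orderId", "consignee", "address", "weight"},
    "updateShipmentOrder": {"orderId", "address", "weight"},
    "trackParcel":         {"parcelId", "status", "location"},
    "getParcelStatus":     {"parcelId", "status"},
}

def derive_business_objects(sigs, min_overlap=2):
    """Group operations whose signatures share at least `min_overlap`
    element names; each group's shared elements form a candidate
    business object (a crude stand-in for interface introspection)."""
    groups = []
    for op, elems in sigs.items():
        for group in groups:
            if len(group["elements"] & elems) >= min_overlap:
                group["ops"].append(op)
                group["elements"] &= elems   # keep only shared elements
                break
        else:
            groups.append({"ops": [op], "elements": set(elems)})
    return groups

for g in derive_business_objects(signatures):
    print(sorted(g["elements"]), "<-", g["ops"])
```

On this toy input, the shipment-order operations collapse onto one candidate object and the parcel-tracking operations onto another; a real framework would additionally normalise names against the accrued ontology.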

Relevance: 80.00%

Abstract:

Wireless technologies are continuously evolving. Second-generation cellular networks have gained worldwide acceptance. Wireless LANs are commonly deployed in corporations and on university campuses, and their diffusion in public hotspots is growing. Third-generation cellular systems have yet to take hold everywhere; still, there is an impressive amount of ongoing research on deploying beyond-3G systems. These new wireless technologies combine the characteristics of WLAN-based and cellular networks to provide increased bandwidth. The common direction in which all these efforts are headed is IP-based communication. Telephony services have been the killer application for cellular systems; their evolution to packet-switched networks is a natural path. Effective IP telephony signaling protocols, such as the Session Initiation Protocol (SIP) and the H.323 protocol, are needed to establish IP-based telephony sessions. However, IP telephony is just one example of an IP-based communication service. IP-based multimedia sessions are expected to become popular and to offer a wider range of communication capabilities than pure telephony. In order to combine the advances of future wireless technologies with the potential of IP-based multimedia communication, the next step is to obtain ubiquitous communication capabilities. According to this vision, people must be able to communicate even when no support from an infrastructure network is available, needed or desired. To achieve ubiquitous communication, end devices must integrate all the capabilities necessary for IP-based distributed and decentralized communication. Such capabilities are currently missing; for example, it is not possible to use native IP telephony signaling protocols in a totally decentralized way. This dissertation presents a solution for deploying the SIP protocol in a decentralized fashion, without the support of infrastructure servers.
The proposed solution is mainly designed to fit the needs of decentralized mobile environments, and can be applied to small-scale ad-hoc networks as well as larger networks with hundreds of nodes. A framework allowing discovery of SIP users in ad-hoc networks, and the establishment of SIP sessions among them in a fully distributed and secure way, is described and evaluated. Security support allows ad-hoc users to authenticate the sender of a message and to verify the integrity of a received message. The distributed session management framework has been extended to achieve interoperability with the Internet and with native Internet applications. With limited extensions to the SIP protocol, we have designed and experimentally validated a SIP gateway allowing SIP signaling between ad-hoc networks with private addressing space and native SIP applications in the Internet. The design is completed by an application-level relay that permits instant messaging sessions to be established in heterogeneous environments. The resulting framework constitutes a flexible and effective approach for the pervasive deployment of real-time applications.
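The core idea of registrar-less SIP can be sketched as follows (illustrative only: the class and message names are assumptions, not the dissertation's protocol, and the multicast channel is simulated with an in-memory bus rather than real UDP multicast). Each node announces its SIP URI to the ad-hoc network, and peers cache what they hear so that sessions can be established without any server:

```python
class AdHocNetwork:
    """Stand-in for the multicast channel of an ad-hoc network."""
    def __init__(self):
        self.nodes = []

    def join(self, node):
        self.nodes.append(node)

    def multicast(self, sender, message):
        for node in self.nodes:
            if node is not sender:
                node.receive(message)

class SipNode:
    def __init__(self, uri, network):
        self.uri = uri
        self.cache = {}          # peer SIP URI -> last announcement seen
        self.network = network
        network.join(self)

    def announce(self):
        # In a real system this would be a periodic multicast datagram.
        self.network.multicast(self, {"type": "ANNOUNCE", "uri": self.uri})

    def receive(self, message):
        if message["type"] == "ANNOUNCE":
            self.cache[message["uri"]] = message

    def can_invite(self, uri):
        # An INVITE can be addressed directly once the peer is cached.
        return uri in self.cache

net = AdHocNetwork()
alice = SipNode("sip:alice@adhoc.invalid", net)
bob = SipNode("sip:bob@adhoc.invalid", net)
alice.announce()
bob.announce()
print(alice.can_invite("sip:bob@adhoc.invalid"))   # True
```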

Relevance: 80.00%

Abstract:

Prediction of variable-bit-rate compressed video traffic is critical to dynamic allocation of resources in a network. In this paper, we propose a technique for preprocessing the dataset used to train a video traffic predictor. The technique identifies noisy instances in the data using a fuzzy inference system. We focus on three prediction techniques, namely linear regression, neural networks and support vector regression, and analyze their performance on H.264 video traces. Our experimental results reveal that data preprocessing greatly improves the performance of the linear regression and neural network predictors, but is not effective for support vector regression.
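A toy version of the pipeline (the membership function, thresholds and synthetic trace below are invented for this sketch; they are not the paper's fuzzy inference system or its H.264 data): flag noisy samples with a crude fuzzy "noisiness" degree, smooth them, then fit a simple linear-regression predictor:

```python
import random

random.seed(7)

# Synthetic VBR-like frame-size trace: a smooth sawtooth baseline with
# injected spikes standing in for noisy instances.
trace = [1000 + 50 * (i % 10) for i in range(200)]
for i in random.sample(range(1, 199), 15):
    trace[i] += random.choice([-1, 1]) * 2000

def noise_membership(prev, cur, nxt, scale=500.0):
    """Crude triangular membership for 'noisy': degree grows with the
    sample's distance from BOTH neighbours (a stand-in for a full
    fuzzy inference system)."""
    dev = min(abs(cur - prev), abs(cur - nxt))
    return min(dev / scale, 1.0)

def preprocess(series, threshold=0.8):
    """Replace samples judged noisy by the average of their neighbours."""
    out = list(series)
    for i in range(1, len(series) - 1):
        if noise_membership(series[i - 1], series[i], series[i + 1]) >= threshold:
            out[i] = (series[i - 1] + series[i + 1]) / 2.0
    return out

def fit_ar1(series):
    """Least-squares fit of x[t] ~ a*x[t-1] + b: a linear-regression predictor."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mse(series, a, b):
    errs = [(y - (a * x + b)) ** 2 for x, y in zip(series[:-1], series[1:])]
    return sum(errs) / len(errs)

raw_mse = mse(trace, *fit_ar1(trace))
clean = preprocess(trace)
clean_mse = mse(clean, *fit_ar1(clean))
print(raw_mse, clean_mse)  # preprocessing should lower the prediction error
```

On this synthetic trace, filtering the spikes before fitting reduces the predictor's training error, mirroring the paper's observation for linear regression.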

Relevance: 80.00%

Abstract:

The strategy of Spanish banks with respect to the Internet has been framed between the modernization of the sector and the fear of competition from third parties. In this article we show how, since the initial effervescence, institutions have focused on building a profitable business as part of a multichannel strategy. Nevertheless, Spanish institutions have still not managed to exploit all the advantages the Internet offers for the marketing of financial products.

Relevance: 80.00%

Abstract:

The relentlessly increasing demand for network bandwidth, driven primarily by Internet-based services such as mobile computing, cloud storage and video-on-demand, calls for more efficient utilization of the available communication spectrum, such as that afforded by resurgent DSP-powered coherent optical communications. Encoding information in the phase of the optical carrier, using multilevel phase-modulation formats, and employing coherent detection at the receiver allows for enhanced spectral efficiency and thus enables increased network capacity. The distributed-feedback (DFB) semiconductor laser has served as the near-exclusive light source powering the fiber-optic, long-haul network for over 30 years. The transition to coherent communication systems is pushing the DFB laser to the limits of its abilities, owing to its limited temporal coherence, which directly limits the number of different phases that can be imparted to a single optical pulse and thus the data capacity. Temporal coherence, most commonly quantified by the spectral linewidth Δν, is limited by phase noise, the result of quantum-mandated spontaneous emission of photons arising from random carrier recombination in the active region of the laser.

In this work we develop a new type of semiconductor laser with the requisite coherence properties. We demonstrate electrically driven lasers characterized by a quantum-noise-limited spectral linewidth as low as 18 kHz. This narrow linewidth is the result of a fundamentally new laser design philosophy that separates the functions of photon generation and storage, enabled by a hybrid Si/III-V integration platform. Photons generated in the active region of the III-V material are readily stored away in the low-loss Si that hosts the bulk of the laser field, thereby enabling high-Q photon storage. The storage of a large number of coherent quanta acts as an optical flywheel, which by its inertia reduces the effect of the spontaneous-emission-mandated phase perturbations on the laser field, while the enhanced photon lifetime effectively reduces the emission rate of incoherent quanta into the lasing mode. Narrow linewidths are obtained over a wavelength bandwidth spanning the entire optical communication C-band (1530-1575 nm) at only a fraction of the input power required by conventional DFB lasers. The results presented in this thesis hold great promise for the large-scale integration of lithographically tuned, high-coherence laser arrays for use in coherent communications, which will enable Tb/s-scale data capacities.
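The flywheel argument can be made quantitative with the Schawlow-Townes relation; in one common form (prefactors vary by convention, and corrections such as the linewidth-enhancement factor are omitted here), the quantum-limited linewidth falls quadratically with the cavity Q:

```latex
\Delta\nu_{\mathrm{ST}} \approx \frac{2\pi h\nu\,(\Delta\nu_c)^2}{P_{\mathrm{out}}},
\qquad \Delta\nu_c = \frac{\nu}{Q}
\quad\Longrightarrow\quad
\Delta\nu_{\mathrm{ST}} \propto \frac{h\nu^{3}}{Q^{2}\,P_{\mathrm{out}}}
```

so storing the laser field in the low-loss Si, which raises the cavity Q, attacks the linewidth far more effectively than raising the output power alone.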

Relevance: 80.00%

Abstract:

The better models of e-Gov posit high levels of informational communication between citizen and state. Unfortunately, in one area that communication has traditionally been poor: access to sources of law. There have been a number of reasons for this, but a primary one is that law was historically mediated for the citizen by the legal profession. This situation is changing, with ever-increasing numbers of unrepresented litigants involved at all levels of national court systems in every country, as well as a generally higher level of intrusion of legislation into everyday home and business life. There have been attempts to improve access through Internet-based services, but these have improved communication (‘understanding of law’) to only a limited extent. It may be time, this article suggests, to consider re-engineering legal sources so that they better fit the needs of e-Gov.

Relevance: 80.00%

Abstract:

IPTV is now offered by several operators in Europe, the US and Asia, using broadcast video over private IP networks that are isolated from the Internet. IPTV services rely on transmission of live (real-time) and/or stored video. Video on Demand (VoD) and Time-shifted TV are implemented by IP unicast, while Broadcast TV (BTV) and Near Video on Demand are implemented by IP multicast. IPTV services require QoS guarantees and can tolerate no more than 10^-6 packet loss probability, 200 ms delay, and 50 ms jitter. Low delay is essential for satisfactory trick-mode performance (pause, resume, fast forward) for VoD, and for fast channel-change time for BTV. Internet Traffic Engineering (TE) is defined in RFC 3272 and involves both capacity management and traffic management. Capacity management includes capacity planning, routing control, and resource management. Traffic management includes (1) nodal traffic-control functions such as traffic conditioning, queue management, and scheduling, and (2) other functions that regulate traffic flow through the network or arbitrate access to network resources. An IPTV network architecture includes multiple networks (core network, metro network, access network and home network) that connect devices (super head-end, video hub office, video serving office, home gateway, set-top box). Each IP router in the core and metro networks implements some queueing and packet-scheduling mechanism at the output link controller. Popular schedulers in IP networks include Priority Queueing (PQ), Class-Based Weighted Fair Queueing (CBWFQ), and Low Latency Queueing (LLQ), which combines PQ and CBWFQ. This thesis analyzes several packet-scheduling algorithms that can optimize the trade-off between system capacity and end-user performance for these traffic classes. The FIFO, PQ, and GPS queueing methods had previously been implemented in the simulator; this thesis implements the LLQ scheduler in the simulator and evaluates the performance of these packet schedulers.
The simulator was provided by Ernst Nordström; it was built in the Visual C++ 2008 environment and tested and analyzed in MATLAB 7.0 under Windows Vista.
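The LLQ discipline described above can be sketched in a few lines (a toy model, not the thesis' simulator: the class names, weights and weighted round-robin stand-in for CBWFQ are assumptions). A strict-priority queue is always drained first; the remaining classes share the link by weight:

```python
from collections import deque

class LLQScheduler:
    """Toy Low Latency Queueing: one strict-priority queue served first
    (e.g. BTV / voice), remaining classes served by weighted round-robin
    as a simple stand-in for CBWFQ."""
    def __init__(self, weights):
        self.priority = deque()
        self.classes = {c: deque() for c in weights}
        self.weights = dict(weights)   # packets per class per WRR round
        self.credit = dict(weights)
        self.order = list(weights)

    def enqueue(self, cls, pkt):
        if cls == "priority":
            self.priority.append(pkt)
        else:
            self.classes[cls].append(pkt)

    def dequeue(self):
        if self.priority:                      # LLQ: the PQ always wins
            return self.priority.popleft()
        if not any(self.classes.values()):
            return None
        while True:                            # weighted round-robin pass
            cls = self.order[0]
            if self.classes[cls] and self.credit[cls] > 0:
                self.credit[cls] -= 1
                return self.classes[cls].popleft()
            # class empty or out of credit: refresh it and move on
            self.order.append(self.order.pop(0))
            self.credit[cls] = self.weights[cls]

s = LLQScheduler({"vod": 2, "data": 1})
for pkt in ["v1", "v2", "v3"]:
    s.enqueue("vod", pkt)
for pkt in ["d1", "d2"]:
    s.enqueue("data", pkt)
s.enqueue("priority", "p1")
print([s.dequeue() for _ in range(7)])
# ['p1', 'v1', 'v2', 'd1', 'v3', 'd2', None]
```

The priority packet leaves first, after which VoD and data interleave roughly 2:1 as the weights prescribe.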

Relevance: 80.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 80.00%

Abstract:

Ubiquitous computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wealth of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. In order to overcome such obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation - the implementation of static Web interfaces - and dynamic adaptation - the alteration, at execution time, of static interfaces so as to adapt to different contexts of use. In hybrid fashion, our methodology benefits from the advantages of both adaptation strategies. Along these lines, we designed and implemented UbiCon, a framework over which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces, and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
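The hybrid idea can be illustrated schematically (the field names, context attributes and thresholds below are invented; this is not UbiCon's API): a statically authored interface is rewritten at run time for the detected context of use, leaving the static artifact untouched:

```python
# Statically authored interface description (the "static adaptation" half).
STATIC_INTERFACE = {
    "layout": "two-column",
    "image_quality": "high",
    "font_size": 12,
}

def adapt(interface, context):
    """Dynamic adaptation: derive a context-specific variant of the
    statically authored interface without modifying the original."""
    adapted = dict(interface)
    if context.get("screen_width", 1024) < 480:       # small handset
        adapted["layout"] = "single-column"
        adapted["font_size"] = 16
    if context.get("bandwidth_kbps", 10_000) < 256:   # slow link
        adapted["image_quality"] = "low"
    return adapted

mobile = adapt(STATIC_INTERFACE, {"screen_width": 320, "bandwidth_kbps": 128})
print(mobile)
```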

Relevance: 80.00%

Abstract:

Currently a large number of companies offer services for content analysis and data mining of social networks, with the aim of performing opinion analysis and reputation management. A high percentage of small and medium-sized enterprises (SMEs) offer solutions specific to a particular sector or industrial domain. However, acquiring the basic technology needed to offer such services is too complex and represents too high an overhead for their limited resources. The goal of the European project OpeNER is the reuse and development of components and resources for linguistic processing that provide the technology required for industrial and/or academic use.

Relevance: 80.00%

Abstract:

Pursuant to Public Act 93-0331, the Illinois Workforce Investment Board is required to submit annual progress reports on the benchmarks established for measuring workforce development in Illinois.

Relevance: 80.00%

Abstract:

E-satisfaction as a construct has gained increasing importance in the marketing literature in recent times. The examination of consumer satisfaction in an online context follows the growing consensus that in Internet retailing, as in traditional retailing, consumer satisfaction is not only a critical performance outcome, but also a primary predictor of customer loyalty and thus, the Internet retailer's endurance and success. The current study replicates the initial examination of e-satisfaction within the U.S. by [Szymanski, David M., & Richard T. Hise (2000). E-satisfaction: An initial examination. Journal of Retailing, 76(3), 309–322] among a sample of online consumers drawn from Germany. The replication was extended to two contexts—consumer satisfaction with Internet retail shopping and consumer satisfaction with Internet financial services sites. The results yield rich insights into the validity of extending the measurement and predictors of e-satisfaction to a trans-national context.

Relevance: 80.00%

Abstract:

Wireless-communication technology can be used to improve road safety and to provide Internet access inside vehicles. This paper proposes a cross-layer protocol called coordinated external peer communication (CEPEC) for Internet-access services and peer communications in vehicular networks. We assume that IEEE 802.16 base stations (BSs) are installed along highways and that vehicles are equipped with the same air interface. Vehicles located outside the limited coverage of their nearest BS can still access the Internet via a multihop route to that BS. For Internet-access services, the objective of CEPEC is to increase the end-to-end throughput while providing a fairness guarantee on bandwidth usage among road segments. To achieve this goal, the road is logically partitioned into segments of equal length. A relaying head is selected in each segment, which performs both local packet collecting and aggregated packet relaying. The simulation results show that the proposed CEPEC protocol provides higher throughput with guaranteed fairness in multihop data delivery in vehicular networks when compared with a purely IEEE 802.16-based protocol.
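The segmentation and relay-head election can be sketched as follows (a toy model: the segment length, vehicle positions, and the "smallest position" proxy for proximity to the BS are invented for this illustration, not CEPEC's actual election rule):

```python
SEGMENT_LEN = 500.0   # metres, an assumed segment length

def segment_of(pos):
    """Map a vehicle position (metres along the road) to its segment index."""
    return int(pos // SEGMENT_LEN)

def elect_relay_heads(vehicles):
    """vehicles: {vehicle_id: position_m}; returns {segment: head_id}.
    The vehicle with the smallest position in each segment is chosen as
    relaying head, standing in for 'closest to the base station'."""
    heads = {}
    for vid, pos in vehicles.items():
        seg = segment_of(pos)
        if seg not in heads or pos < vehicles[heads[seg]]:
            heads[seg] = vid
    return heads

vehicles = {"a": 120.0, "b": 430.0, "c": 510.0, "d": 980.0, "e": 620.0}
print(elect_relay_heads(vehicles))   # one relaying head per occupied segment
```

Each elected head would then collect packets from its segment and forward the aggregate hop-by-hop toward the BS.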

Relevance: 80.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process for efficiently allocating physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold: cloud users can size their VMs appropriately and pay only for the resources that they need, and service providers can offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
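A minimal sketch of the revenue-driven idea (the revenue curves, capacity figure and greedy rule are invented for this illustration; the thesis' actual algorithm and its learned performance models are not reproduced here): given diminishing per-unit revenue for each VM, allocate resource units to the highest marginal revenue first, which is optimal for concave curves:

```python
import heapq

def allocate(capacity_units, marginal_revenue):
    """marginal_revenue: {vm: [revenue of 1st unit, 2nd unit, ...]},
    assumed non-increasing per VM. Returns {vm: units allocated}."""
    alloc = {vm: 0 for vm in marginal_revenue}
    # max-heap of (-next marginal revenue, vm)
    heap = [(-mr[0], vm) for vm, mr in marginal_revenue.items() if mr]
    heapq.heapify(heap)
    for _ in range(capacity_units):
        if not heap:
            break                      # every revenue curve is exhausted
        _, vm = heapq.heappop(heap)
        alloc[vm] += 1
        nxt = alloc[vm]
        if nxt < len(marginal_revenue[vm]):
            heapq.heappush(heap, (-marginal_revenue[vm][nxt], vm))
    return alloc

# Hypothetical SLA-derived revenue per extra CPU unit for three VMs.
curves = {"web": [9, 6, 2], "db": [8, 7, 1], "batch": [4, 3, 2]}
print(allocate(5, curves))
```

In practice the marginal-revenue curves would come from the learned performance models combined with the SLAs, rather than being fixed tables.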