962 results for Conventional matching networks
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time-dependent variance for many financial time series. However, such models are essentially linear in form, and we can ask whether a non-linear model for variance can improve results just as non-linear models for the mean (such as neural networks) have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic underestimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models that provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant-variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
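For readers unfamiliar with the approach, the following is a minimal Mixture Density Network sketch in PyTorch; the layer sizes, names and single-hidden-layer architecture are illustrative assumptions, not the paper's configuration:

```python
# Minimal Mixture Density Network sketch (illustrative; not the paper's code).
# An MLP maps an input window to the weights, means and variances of a
# Gaussian mixture over the next value; training minimises the negative
# log-likelihood, from which a conditional variance estimate follows.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, n_inputs, n_hidden=16, n_components=3):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, n_components)         # mixture weight logits
        self.mu = nn.Linear(n_hidden, n_components)         # component means
        self.log_sigma = nn.Linear(n_hidden, n_components)  # log standard deviations

    def forward(self, x):
        h = self.hidden(x)
        return torch.log_softmax(self.pi(h), dim=-1), self.mu(h), self.log_sigma(h)

def mdn_nll(log_pi, mu, log_sigma, y):
    # Negative log-likelihood of targets y under the predicted mixture.
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y.unsqueeze(-1)) + log_pi
    return -torch.logsumexp(log_prob, dim=-1).mean()
```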
Abstract:
B-ISDN is a universal network which supports diverse mixes of services, applications and traffic. ATM has been accepted worldwide as the transport technique for future use in B-ISDN. ATM, being a simple packet-oriented transfer technique, provides a flexible means of supporting a continuum of transport rates and is efficient due to the possible statistical sharing of network resources by multiple users. In order to fully exploit the potential statistical gain while at the same time supporting diverse service and traffic mixes, efficient traffic control must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity for the success and viability of future B-ISDN. Congestion and flow control are difficult in the broadband environment due to high link speeds, wide-area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet-switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures based mainly on preventive measures for a private ATM-based network are proposed and their performance evaluated. The traffic controls include connection admission control (CAC), traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at the call level and the cell level. They are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides the acceptance or denial of a connection request and allocates bandwidth to the new connection according to one of three schemes: peak bit rate, statistical rate and average bit rate. The statistical multiplexing rate is based on a 'bufferless fluid flow model', which is simple and robust. The allocation of an average bit rate to data traffic, at the expense of delay, improves network bandwidth utilisation.
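As a toy illustration of a CAC decision under the three allocation schemes, consider the sketch below; the acceptance rule and the 'statistical' rate formula are hypothetical stand-ins, not the schemes evaluated in this research:

```python
# Toy connection admission control (CAC) sketch. The scheme names follow the
# abstract, but all parameters and formulas here are illustrative assumptions.
def effective_rate(conn, scheme):
    if scheme == "peak":
        return conn["peak_rate"]
    if scheme == "average":
        return conn["avg_rate"]
    if scheme == "statistical":
        # Bufferless fluid-flow style: a rate between average and peak,
        # weighted by the source's activity factor (hypothetical formula).
        activity = conn["avg_rate"] / conn["peak_rate"]
        return conn["avg_rate"] + (conn["peak_rate"] - conn["avg_rate"]) * activity
    raise ValueError(scheme)

def admit(new_conn, admitted, link_capacity_mbps, scheme="statistical"):
    # Accept the new connection only if total allocated rate stays within capacity.
    used = sum(effective_rate(c, scheme) for c in admitted)
    return used + effective_rate(new_conn, scheme) <= link_capacity_mbps
```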
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. We analyse the effects on the bounds and on the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix defining the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. The thesis also presents an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks were trained on the task of labelling segmented outdoor images.
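A brief sketch of the kernel construction in question may help: a squared-exponential covariance with a general distance matrix M = W Wᵀ, which reduces to the conventional diagonal (ARD) case when W is diagonal. The parametrisation below is an illustrative assumption:

```python
# Sketch: squared-exponential GP covariance with a general positive
# semi-definite distance matrix M = W W^T (illustrative parametrisation).
import numpy as np

def sq_exp_kernel(X1, X2, W, sigma_f=1.0):
    # Project inputs through W so that the squared distance becomes
    # (x - x')^T W W^T (x - x'); a diagonal W recovers the usual
    # axis-aligned (ARD) lengthscales.
    Z1, Z2 = X1 @ W, X2 @ W
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2)

# A low-rank W (e.g. shape (D, 2)) lets the kernel discover a linear map
# from the D manifest variables to a 2-dimensional hidden-feature space.
```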
Abstract:
The contributions of this research fall into three distinct, but related, areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are undesirable; a balance between additional bandwidth and delays due to retransmissions must therefore be struck. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and its potential as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated while buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the introduced metric and show that the objective and subjective scores are closely correlated.
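As an illustration of the pause-based continuity idea, the sketch below replays a packet-arrival trace against a constant playback rate and records stalls; the way pause count and duration are combined into one score is an assumption, not the thesis's exact pause-intensity definition:

```python
# Replay a sorted packet-arrival trace against a fixed playback rate and
# record stall durations (all names and the scoring rule are assumptions).
def playback_pauses(arrivals, rate_pps, startup_delay=1.0):
    interval = 1.0 / rate_pps
    clock = arrivals[0] + startup_delay   # playback starts after buffering
    pauses = []
    for t in arrivals:                    # packets consumed in arrival order
        if t > clock:                     # buffer ran dry: stall until arrival
            pauses.append(t - clock)
            clock = t
        clock += interval                 # one packet played per interval
    return pauses

pauses = playback_pauses([0.0, 0.1, 0.2, 1.5, 1.6], rate_pps=10)
score = len(pauses) * sum(pauses)         # assumed intensity-style combination
```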
Abstract:
Theoretical developments on pinning control of complex dynamical networks have mainly focused on deterministic versions of the model dynamics. However, the dynamical behavior of most real networks is often affected by stochastic noise components. In this paper the pinning control of a stochastic version of the coupled map lattice network with spatiotemporal characteristics is studied. These complex dynamical networks exhibit functional uncertainty, which should be considered when calculating stabilizing control signals. Two feedback control methods are considered: conventional feedback control and a modified stochastic feedback control. It is shown that the typically-used conventional control method ignores the model uncertainty, leading to a reduction and potentially a collapse of the control efficiency. Numerical verification of the main result is provided for a chaotic coupled map lattice network. © 2011 IEEE.
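A toy version of the setting can be sketched as follows: a noisy coupled map lattice with conventional feedback applied at pinned sites. Parameters are illustrative, and the paper's modified stochastic controller is not reproduced here:

```python
# Toy pinned coupled map lattice with additive noise (illustrative parameters).
import numpy as np

def logistic(x, a=3.9):
    return a * x * (1.0 - x)

def cml_step(x, eps=0.3, noise_std=0.01, pinned=(), target=0.5, gain=0.8, rng=None):
    rng = rng or np.random.default_rng()
    left, right = np.roll(x, 1), np.roll(x, -1)
    # Diffusively coupled logistic lattice plus a stochastic perturbation.
    x_next = (1 - eps) * logistic(x) + 0.5 * eps * (logistic(left) + logistic(right))
    x_next += rng.normal(0.0, noise_std, size=x.shape)
    for i in pinned:                      # conventional feedback at pinned sites
        x_next[i] -= gain * (x[i] - target)
    return np.clip(x_next, 0.0, 1.0)

x = np.random.default_rng(0).random(50)
for _ in range(200):
    x = cml_step(x, pinned=range(0, 50, 5))
```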
Abstract:
In this paper we propose a hybrid TCP/UDP transport, specifically for H.264/AVC encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. When implementing the hybrid approach, we argue that the playback at the receiver often need not be 100% perfect, provided that a certain level of quality is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. This allows the use of additional features in the H.264/AVC standard which provide enhanced playback quality while also reducing throughput. These benefits are demonstrated through experimental results using a test-bed to emulate the hybrid proposal. We compare the proposed system with other protection methods, such as FEC, and in one case show that, for the same bandwidth overhead, FEC is unable to match the performance of the hybrid system in terms of playback quality. Furthermore, we measure the delay associated with our approach and examine its potential as an alternative to the conventional methods of transporting video by either TCP or UDP alone. © 2011 IEEE.
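A hypothetical packet scheduler conveys the core idea: route parameter sets and IDR slices over TCP and less critical slices over UDP. The classification rule and transport handling below are assumptions, not the paper's implementation:

```python
# Hypothetical hybrid TCP/UDP scheduler for H.264/AVC NAL units.
IMPORTANT_NAL_TYPES = {5, 7, 8}   # IDR slice, SPS, PPS (H.264 NAL unit types)

def nal_type(packet: bytes) -> int:
    # The NAL unit type occupies the low 5 bits of the first header byte.
    return packet[0] & 0x1F

def send(packet, tcp_sock, udp_sock, udp_addr):
    if nal_type(packet) in IMPORTANT_NAL_TYPES:
        tcp_sock.sendall(packet)               # reliable path for must-have data
    else:
        udp_sock.sendto(packet, udp_addr)      # loss-tolerant path for the rest
```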
Abstract:
Business angels are natural persons who provide equity financing for young enterprises and gain ownership in them. They are usually anonymous investors who operate in the background of the companies. An important feature is that, beyond funding the enterprises, they can contribute to the companies' success with their special expertise and strategic support, drawing on their business experience. As a result of the asymmetric information between angels and companies, their matching is difficult (Becsky-Nagy – Fazekas 2015), and the fact that angel investors prefer anonymity makes it harder for entrepreneurs to obtain informal venture capital. The primary aim of the various business angel organizations and networks is to ease this matching process by intermediating between the two parties. The role of these organizations in the informal venture capital market is increasing compared with that of individually operating angels. Recognition of their economic importance has led many governments to support them. There have also been public initiatives aimed at establishing such intermediary organizations, which has led to the institutionalization of business angels. This study, through a characterization of business angels, focuses on the progress of these informational intermediaries and their paths of development, with regard to international trends and the current situation of Hungarian business angels and angel networks.
Abstract:
The current research considers the capacity of a local organic food system for producer and consumer empowerment and sustainable development outcomes in western Guatemala. Many have argued that the forging of local agricultural networks linking farmers, consumers, and supporting institutions is an effective tool for challenging the negative economic, environmental, and sociopolitical impacts associated with industrial models of global food production. But does this work in the context of agrarian development in the developing world? Despite the fact that there is extensive literature concerning local food system formation in the global north, there remains a paucity of research covering how the principles of local food systems are being integrated into agricultural development projects in developing countries. My work critically examines claims to agricultural sustainability and actor empowerment in a local organic food system built around non-traditional agricultural crops in western Guatemala. Employing a mixed methods research design involving twenty months of participant observation, in-depth interviewing, surveying, and a self-administered questionnaire, the project evaluates the sustainability of this NGO-led development initiative and local food movement along several dimensions. Focusing on the unique economic and social networks of actors and institutions at each stage of the commodity chain, this research shows how the growth of an alternative food system continues to be shaped by context specific processes, politics, and structures of conventional food systems. Further, it shows how the specifics of context also produce new relationships of cooperation and power in the development process. Results indicate that structures surrounding agrarian development in the Guatemalan context give rise to a hybrid form of development that at the same time contests and reinforces conventional models of food production and consumption. Therefore, participation entails a host of compromises and tradeoffs that result in mixed successes and setbacks, as actors attempt to refashion conventional commodity chains through local food system formation.
Abstract:
Background: HIV is known for its ability to exploit numerous genetic and evolutionary mechanisms to ensure its proliferation, among them high replication, mutation and recombination rates. Sliding MinPD, a recently introduced computational method [1], was used to investigate the patterns of evolution of serially-sampled HIV-1 sequence data from eight patients, with a special focus on the emergence of X4 strains. Unlike other phylogenetic methods, Sliding MinPD combines distance-based inference with a nonparametric bootstrap procedure and automated recombination detection to reconstruct the evolutionary history of longitudinal sequence data. We present serial evolutionary networks as a longitudinal representation of the mutational pathways of a viral population in a within-host environment. The longitudinal representation of the evolutionary networks was complemented with charts of clinical markers to facilitate correlation analysis between pertinent clinical information and the evolutionary relationships. Results: Analysis based on the predicted networks suggests the following: significantly stronger recombination signals (p = 0.003) for the inferred ancestors of the X4 strains; recombination events between different lineages and between putative reservoir virus and virus from a later population; and an early star-like topology observed for four of the patients who died of AIDS. A significantly higher number of recombinants were predicted at sampling points that corresponded to peaks in the viral load levels (p = 0.0042). Conclusion: Our results indicate that serial evolutionary networks of HIV sequences enable systematic statistical analysis of the implicit relations embedded in the topology of the structure and can greatly facilitate the identification of patterns of evolution that can lead to specific hypotheses and new insights. The conclusions of applying our method to empirical HIV data support the conventional wisdom behind the new generation of HIV treatments: to keep the virus in check, viral loads need to be suppressed to almost undetectable levels.
Abstract:
Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to utilize nearby existing radio signals to communicate. As there is no need to generate their own energetic radio signal, the devices can benefit from a simple design, are very inexpensive and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.
The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both of these sources of interference arise from the scattering of the transmitted signal off stationary and moving objects in the environment. Additionally, the measurement of the location of the backscatter device is negatively affected by both the clutter and the modulation of the signal return.
This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes the use of run-length limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature-amplitude modulation (QAM) schemes and provides an increase in rate by up to a factor of two compared with previous methods.
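As a toy illustration of the run-length limited constraint involved, the checker below verifies a (d, k) constraint on a binary sequence; the intuition is that bounding run lengths keeps the modulation spectrum away from the low-frequency region where self-interference and clutter concentrate. The specific code construction of this work is not reproduced:

```python
# Toy (d, k) run-length-limited constraint checker (illustrative): at least d
# and at most k zeros must separate consecutive ones.
def satisfies_rll(bits, d, k):
    run, seen_one = 0, False
    for b in bits:
        if b == 1:
            if seen_one and run < d:   # too few zeros since the last one
                return False
            run, seen_one = 0, True
        else:
            run += 1
            if run > k:                # run of zeros too long
                return False
    return True

assert satisfies_rll([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=3)
assert not satisfies_rll([1, 1, 0, 1], d=2, k=3)
```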
Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, optimally low range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference between the localization and communication tasks exists. A phase-discriminating algorithm is proposed that makes it possible to separate the waveform coding from the communication coding upon reception and to achieve localization with an increase in signal energy of up to 3 dB compared with previously reported results.
The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.
Simulations comparing the performance of different codes corroborate the theoretical results and illustrate the trade-off between information rate and clutter mitigation, as well as the trade-offs among choices of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.
Abstract:
This dissertation studies capacity investments in energy sources, with a focus on renewable technologies, such as solar and wind energy. We develop analytical models to provide insights for policymakers and use real data from the state of Texas to corroborate our findings.
We first take a strategic perspective and focus on electricity pricing policies. Specifically, we investigate the capacity investments of a utility firm in renewable and conventional energy sources under flat and peak pricing policies. We consider the generation patterns and intermittency of solar and wind energy in relation to the electricity demand throughout a day. We find that flat pricing leads to a higher investment level for solar energy, and that it can still lead to more investment in wind energy if a considerable amount of wind energy is generated throughout the day.
In the second essay, we complement the first one by focusing on the problem of matching supply with demand in every operating period (e.g., every five minutes) from the perspective of a utility firm. We study the interaction between renewable and conventional sources with different levels of operational flexibility, i.e., the possibility of quickly ramping energy output up or down. We show that operational flexibility determines these interactions: renewable and inflexible sources (e.g., nuclear energy) are substitutes, whereas renewable and flexible sources (e.g., natural gas) are complements.
In the final essay, rather than the capacity investments of utility firms, we focus on the capacity investments of households in rooftop solar panels. We investigate whether these investments may cause a utility death spiral effect, which is a vicious circle of increased solar adoption and higher electricity prices. We observe that the current rate-of-return regulation may lead to a death spiral for utility firms. We show that one way to reverse the spiral effect is to allow utility firms to maximize their profits by determining electricity prices.
Abstract:
With the popularization of GPS-enabled devices such as mobile phones, location data are becoming available at an unprecedented scale. The locations may be collected from many different sources such as vehicles moving around a city, user check-ins in social networks, and geo-tagged micro-blogging photos or messages. Besides the longitude and latitude, each location record may also have a timestamp and additional information such as the name of the location. Time-ordered sequences of these locations form trajectories, which together contain useful high-level information about people's movement patterns.
The first part of this thesis focuses on a few geometric problems motivated by the matching and clustering of trajectories. We first give a new algorithm for computing a matching between a pair of curves under existing models such as dynamic time warping (DTW). The algorithm is more efficient than standard dynamic programming algorithms both theoretically and practically. We then propose a new matching model for trajectories that avoids the drawbacks of existing models. For trajectory clustering, we present an algorithm that computes clusters of subtrajectories, which correspond to common movement patterns. We also consider trajectories of check-ins, and propose a statistical generative model, which identifies check-in clusters as well as the transition patterns between the clusters.
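For context, the standard O(nm) dynamic-programming DTW that the new algorithm improves upon can be sketched as follows; the thesis's faster algorithm itself is not reproduced here:

```python
# Standard dynamic-programming DTW between two polygonal curves, shown as the
# baseline; D[i][j] is the optimal warping cost of the first i and j points.
import numpy as np

def dtw(P, Q):
    n, m = len(P), len(Q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(P[i - 1]) - np.asarray(Q[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([(0, 0), (1, 1), (2, 2)], [(0, 0), (2, 2)]))  # ~1.414
```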
The second part of the thesis considers the problem of covering shortest paths in a road network, motivated by an EV charging station placement problem. More specifically, a subset of vertices in the road network is selected to place charging stations so that every shortest path contains enough charging stations and can be traveled by an EV without draining the battery. We first introduce a general technique for the geometric set cover problem. This technique leads to near-linear-time approximation algorithms, which are the state-of-the-art algorithms for this problem in either running time or approximation ratio. We then use this technique to develop a near-linear-time algorithm for this shortest-path cover problem.
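For intuition about the covering formulation, the classic greedy set-cover baseline is sketched below with hypothetical inputs; the thesis's near-linear-time geometric technique is not reproduced:

```python
# Classic greedy set cover: each candidate vertex "covers" the set of shortest
# paths passing through it; repeatedly pick the vertex covering the most
# still-uncovered paths. Inputs here are hypothetical.
def greedy_cover(paths_by_vertex, all_paths):
    uncovered, chosen = set(all_paths), []
    while uncovered:
        v = max(paths_by_vertex, key=lambda u: len(paths_by_vertex[u] & uncovered))
        gain = paths_by_vertex[v] & uncovered
        if not gain:
            raise ValueError("remaining paths cannot be covered")
        chosen.append(v)
        uncovered -= gain
    return chosen

stations = greedy_cover(
    {"a": {1, 2}, "b": {2, 3}, "c": {3}},   # vertex -> shortest-path ids through it
    all_paths={1, 2, 3},
)  # -> ["a", "b"]
```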
Abstract:
Community networks are IP-based computer networks that are operated by a community as a common good. In Europe, the most well-known community networks are Guifi in Catalonia, Freifunk in Berlin, Ninux in Italy, Funkfeuer in Vienna and the Athens Wireless Metropolitan Network in Greece. This paper deals with community networks as alternative forms of Internet access and alternative infrastructures, and asks: What do sustainability and unsustainability mean in the context of community networks? What advantages do such networks have over conventional forms of Internet access and infrastructure provided by large telecommunications corporations, and what disadvantages do they face at the same time? This article provides a framework for thinking dialectically about the un/sustainability of community networks. It provides a framework of practical questions that can be asked when assessing power structures in the context of Internet infrastructures and access. It presents an overview of the environmental, economic, political and cultural contradictions that community networks may face, as well as a typology of questions that can be asked in order to identify such contradictions.
Abstract:
People recommenders are a widespread feature of social networking sites and educational social learning platforms alike. However, when these systems are used to extend learners’ Personal Learning Networks, they often fall short of providing recommendations of learning value to their users. This paper proposes a design of a people recommender based on content-based user profiles, and a matching method based on dissimilarity therein. It presents the results of an experiment conducted with curators of the content curation site Scoop.it!, where curators rated personalized recommendations for contacts. The study showed that matching dissimilarity of interpretations of shared interests is more successful in providing positive experiences of breakdown for the curator than is matching on similarity. The main conclusion of this paper is that people recommenders should aim to trigger constructive experiences of breakdown for their users, as the prospect and potential of such experiences encourage learners to connect to their recommended peers.
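A minimal sketch of dissimilarity-based matching over content-based profiles follows; the TF-IDF representation and plain dissimilarity ranking are simplifying assumptions (the paper matches on dissimilar interpretations of shared interests, which is a finer-grained criterion):

```python
# Rank candidate contacts by dissimilarity of content-based profiles
# (illustrative: TF-IDF vectors and 1 - cosine similarity as the score).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_dissimilar(profiles, user, k=3):
    names = list(profiles)
    X = TfidfVectorizer().fit_transform(profiles[n] for n in names)
    sims = cosine_similarity(X[names.index(user)], X).ravel()
    # Rank the other users by dissimilarity (1 - cosine similarity), descending.
    others = [(1 - sims[i], n) for i, n in enumerate(names) if n != user]
    return [n for _, n in sorted(others, reverse=True)[:k]]

recs = recommend_dissimilar(
    {"ann": "deep learning for vision",
     "bob": "learning analytics dashboards",
     "eve": "deep learning for speech"},
    user="ann", k=1)
```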
Abstract:
Wind generation in highly interconnected power networks creates local and centralised stability issues depending on its proximity to conventional synchronous generators and load centres. This paper examines the large-disturbance stability issues (i.e. rotor angle and voltage stability) in power networks with geographically distributed wind resources, in the context of a number of dispatch scenarios based on profiles of historical wind generation for a real power network. Stability issues have been analysed using novel stability indices developed from the dynamic characteristics of wind generation. The results of this study show that localised stability issues worsen when significant penetration of both conventional and wind generation is present, due to their non-complementary characteristics. In contrast, network stability improves when a high penetration of either wind or synchronous generation is present in the network. Therefore, network regions can be clustered into two distinct stability groups (i.e. superior-stability and inferior-stability regions). Network stability improves when a voltage control strategy is implemented at wind farms; however, both stability clusters remain unchanged irrespective of the change in control strategy. Moreover, this study has shown that an enhanced fault ride-through (FRT) strategy for wind farms can improve both voltage and rotor angle stability locally, but only a marginal improvement is evident in neighbouring regions.