848 results for unknown audience
Abstract:
Digital Image
Abstract:
Digital Image
Abstract:
Postcard written to Therese Gottschalk née Molling at Norderney
Abstract:
Digital Image
Abstract:
Digital Image
Abstract:
Digital Image
Abstract:
This paper presents a constructive solution to the problem of designing a reduced-order Luenberger observer for linear systems subject to arbitrary unknown inputs.
Abstract:
The absorption produced by the audience in concert halls is considered a random variable. Beranek's proposal [L. L. Beranek, Music, Acoustics and Architecture (Wiley, New York, 1962), p. 543] that audience absorption is proportional to the area they occupy and not to their number is subjected to a statistical hypothesis test. A two-variable linear regression model of the absorption, with audience area and residual area as regressor variables, is postulated for concert halls without added absorptive materials. Since Beranek's contention amounts to the statement that audience absorption is independent of the seating density, the test of the hypothesis lies in categorizing halls by seating density and examining for significant differences among slopes of the regression planes of the different categories. Such a test shows that Beranek's hypothesis can be accepted. It is also shown that the audience area is a better predictor of the absorption than the audience number. The absorption coefficients and their 95% confidence limits are given for the audience and residual areas. A critique of the regression model is presented.
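As a rough illustration of the regression model described above, the following Python sketch fits total absorption against audience area and residual area by least squares; all numbers are made up, and the seating-density hypothesis test from the abstract is only indicated in a comment:

```python
import numpy as np

# Hypothetical data (square metres / metric sabins); none of these values
# come from the paper.
audience_area = np.array([800.0, 950.0, 1200.0, 700.0, 1500.0])
residual_area = np.array([2400.0, 2600.0, 3100.0, 2100.0, 3900.0])
total_absorption = np.array([1030.0, 1190.0, 1500.0, 900.0, 1880.0])

# Two-variable regression: A_total ~ a * S_audience + b * S_residual
X = np.column_stack([audience_area, residual_area])
coef, _, _, _ = np.linalg.lstsq(X, total_absorption, rcond=None)
a_audience, a_residual = coef
print(f"audience absorption coefficient ~ {a_audience:.2f}")
print(f"residual absorption coefficient ~ {a_residual:.2f}")

# Beranek's hypothesis would then be tested by splitting the halls into
# seating-density categories, refitting per category, and checking whether
# the fitted slopes differ significantly.
```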
Abstract:
This study investigates the process of producing interactivity in a converged media environment. The study asks whether more media convergence equals more interactivity. The research object is approached through semi-structured interviews with prominent decision makers within the Finnish media. The main focus of the study is the three major traditional media, radio, television and the printing press, and their ability to adapt to the changing environment. The study develops theoretical models for the analysis of interactive features and convergence. Case studies are formed from the interview data and evaluated against the models. As a result, the cases are plotted and compared on a four-fold table. The cases are Radio Rock, NRJ, Big Brother, Television Chat, Olivia and Sanoma News. It is found that the theoretical models can accurately forecast the results of the case studies. The models are also able to distinguish different aspects of both interactivity and convergence, so that a case which at first glance does not seem very interactive in the end receives the second-highest scores in the analysis. The highest scores are received by Big Brother and Sanoma News. Through the theory and the analysis of the research data, it is found that the concepts of interactivity and convergence are intimately intertwined and in many cases very hard to separate from each other. Hence the answer to the main question of this study is yes: convergence does promote interactivity and audience participation. The main theoretical background for the analysis of interactivity follows the work of Carrie Heeter, Spiro Kiousis and Sally McMillan. Heeter's six-dimensional definition of interactivity is used as the basis for operationalizing interactivity. Actor-network theory is used as the main theoretical framework to analyze convergence. The definition and operationalization of actor-network theory into a model of convergence follows the work of Michel Callon, Bruno Latour, and especially John Law and Felix Stalder.
Abstract:
We present a frontier-based algorithm for searching for multiple goals in a fully unknown environment, with only information about the regions where the goals are most likely to be located. Our algorithm chooses an ``active goal'' from the ``active goal list'' generated by running a Traveling Salesman Problem (TSP) routine on the given centroid locations of the goal regions. We use the concept of ``goal switching'', which not only helps in reaching more goals in a given time, but also prevents unnecessary search around goals that are not accessible (surrounded by walls). The simulation study shows that our algorithm outperforms Multi-Heuristic LRTA* (MELRTA*), a significant representative of multiple-goal search approaches in unknown environments, especially in environments with wall-like obstacles.
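The active-goal mechanism described above can be sketched roughly as follows; a greedy nearest-neighbour tour stands in for whatever TSP routine the authors use, and all names and coordinates are hypothetical:

```python
import math

def nearest_neighbour_tour(start, centroids):
    """Greedy stand-in for the TSP routine: orders the goal-region
    centroids into an 'active goal list' starting from the robot."""
    remaining, tour, current = list(centroids), [], start
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(current, c))
        remaining.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

robot = (0.0, 0.0)
goal_centroids = [(8.0, 2.0), (3.0, 7.0), (9.0, 9.0)]   # made-up goal regions
active_goal_list = nearest_neighbour_tour(robot, goal_centroids)
active_goal = active_goal_list[0]

def switch_goal(goal_list):
    """Crude 'goal switching': demote the current active goal (e.g. when it
    appears walled off) and promote the next goal on the list."""
    return goal_list[1:] + goal_list[:1]
```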
Abstract:
We consider the problem of goal seeking by robots in unknown environments. We present a frontier-based algorithm for finding a route to a goal in a fully unknown environment, where information about the goal region (GR), the region where the goal is most likely to be located, is available. Our algorithm efficiently chooses the best candidate frontier cell, which is on the boundary between explored space and unexplored space and has the maximum ``goal seeking index'', to reach the goal in a minimal number of moves. A modification of the algorithm is also proposed to further reduce the number of moves toward the goal. The algorithm has been tested extensively in simulation runs, and the results demonstrate that it effectively directs the robot to the goal and completes the search task in a minimal number of moves in bounded as well as unbounded environments. The algorithm is shown to perform as well as a state-of-the-art agent-centered search algorithm, RTAA*, in cluttered environments if the exact location of the goal is known at the beginning of the mission, and is shown to perform better in uncluttered environments.
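Since the abstract does not define the ``goal seeking index'', the sketch below uses an assumed index that simply prefers frontier cells that are cheap to reach and close to the goal region; function names and coordinates are hypothetical:

```python
import math

def goal_seeking_index(frontier_cell, robot, goal_region_centroid):
    # Assumed index: negative of (cost to reach the frontier cell plus its
    # straight-line distance to the goal-region centroid).
    return -(math.dist(robot, frontier_cell)
             + math.dist(frontier_cell, goal_region_centroid))

def best_frontier(frontier_cells, robot, goal_region_centroid):
    # Pick the frontier cell (on the boundary between explored and
    # unexplored space) with the maximum index, as the abstract describes.
    return max(frontier_cells,
               key=lambda f: goal_seeking_index(f, robot, goal_region_centroid))

frontiers = [(2.0, 1.0), (1.0, 4.0), (5.0, 5.0)]
print(best_frontier(frontiers, robot=(0.0, 0.0), goal_region_centroid=(6.0, 6.0)))
```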
Abstract:
Berge's elegant dipath partition conjecture from 1982 states that in a dipath partition P of the vertex set of a digraph that minimizes the k-norm (the sum over the dipaths P of min{|P|, k}), there exists a collection C_k of k disjoint independent sets such that each dipath P in P meets exactly min{|P|, k} of the independent sets in C_k. This conjecture extends Linial's conjecture, the Greene-Kleitman Theorem and Dilworth's Theorem to all digraphs. The conjecture is known to be true for acyclic digraphs. For general digraphs, it is known for k = 1 by the Gallai-Milgram Theorem, for k ≥ λ (where λ is the number of vertices in the longest dipath in the graph) by the Gallai-Roy Theorem, and when the optimal path partition P contains only dipaths P with |P| ≥ k. Recently, it was proved (Eur J Combin (2007)) for k = 2. There was no proof that covers all the known cases of Berge's conjecture. In this article, we give an algorithmic proof of a stronger version of the conjecture for acyclic digraphs, using network flows, which covers all the known cases except the case k = 2, and the new, previously unknown case of k = λ - 1 for all digraphs. So far, there has been no proof that unified all these cases. This proof gives hope for finding a proof for all k.
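For reference, the standard formulation of the conjecture can be written out as follows; the notation is assumed here, not quoted from the article:

```latex
% Let \mathcal{P} be a dipath partition of V(D) minimizing the k-norm
% \sum_{P \in \mathcal{P}} \min\{|P|, k\}.  The conjecture asserts that there
% exist k disjoint independent sets I_1, \dots, I_k such that
\[
  \bigl|\{\, j \in \{1,\dots,k\} : I_j \cap V(P) \neq \emptyset \,\}\bigr|
  \;=\; \min\{|P|,\, k\}
  \qquad \text{for every } P \in \mathcal{P}.
\]
```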
Abstract:
In the design of practical web page classification systems one often encounters a situation in which the labeled training set is created by choosing some examples from each class, but the class proportions in this set are not the same as those in the test distribution to which the classifier will actually be applied. The problem is made worse when the amount of training data is also small. In this paper we explore and adapt binary SVM methods that make use of unlabeled data from the test distribution, viz., Transductive SVMs (TSVMs) and expectation regularization/constraint (ER/EC) methods, to deal with this situation. We empirically show that when the labeled training data is small, a TSVM designed using the class ratio tuned by minimizing the loss on the labeled set yields the best performance; its performance is good even when the deviation between the class ratios of the labeled training set and the test set is quite large. When the labeled training data is sufficiently large, an unsupervised Gaussian mixture model can be used to get a very good estimate of the class ratio in the test set; also, when this estimate is used, both TSVM and ER/EC give their best possible performance, with TSVM coming out superior. The ideas in the paper can be easily extended to multi-class SVMs and MaxEnt models.
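The unsupervised class-ratio estimate mentioned above can be illustrated with the following sketch (scikit-learn's GaussianMixture on made-up unlabeled features; the TSVM/ER training step itself is not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Made-up unlabeled test-set features with roughly a 30/70 class split.
rng = np.random.default_rng(0)
unlabeled = np.vstack([rng.normal(-2.0, 1.0, size=(300, 5)),
                       rng.normal(+2.0, 1.0, size=(700, 5))])

# Fit a two-component Gaussian mixture and read the mixing weights as an
# estimate of the test-set class proportions.
gmm = GaussianMixture(n_components=2, random_state=0).fit(unlabeled)
print("estimated class proportions:", gmm.weights_)

# This estimate would then be supplied to the TSVM (or ER/EC) trainer as the
# target positive-class fraction on the unlabeled data.
```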
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
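A small Monte Carlo sketch of the timer scheme helps make the Δ-collision model concrete; the metric-to-timer mapping and the node-count distribution below are assumptions for illustration, not the optimal discrete mapping derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def timer(metric, T_max=10e-3):
    # Illustrative (non-optimal) mapping: higher metric -> shorter timer,
    # so the best node transmits first.
    return T_max * (1.0 - metric)

def trial(delta=0.5e-3, mean_nodes=5):
    # The number of contending nodes is unknown; assume it is Poisson
    # distributed (truncated to at least one node) for this simulation.
    n = max(1, rng.poisson(mean_nodes))
    metrics = rng.uniform(0.0, 1.0, size=n)
    expiry_times = np.sort(timer(metrics))
    # Selection succeeds if no other timer expires within delta seconds of
    # the best node's (earliest) expiry.
    return n == 1 or (expiry_times[1] - expiry_times[0]) > delta

success = np.mean([trial() for _ in range(20000)])
print(f"average success probability ~ {success:.3f}")
```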
Abstract:
The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.