898 results for Sources de connaissances
Abstract:
The new paradigm of connectedness and empowerment brought by the interactivity of Web 2.0 has been challenging the traditionally centralized performance of mainstream media. The media corporation has been able to survive these strong winds by transforming itself into a global multimedia business network embedded in the network society. By establishing networks, e.g. networks of production and distribution, the global multimedia business network has been able to identify potential solutions by opening the doors to innovation in a decentralized and flexible manner. Under this emerging context of re-organization, traditional practices like sourcing need to be re-explained, and that is precisely what this thesis attempts to tackle. Grounded in ICT and the network society, the study seeks to explain, within the Finnish context, the particular case of Helsingin Sanomat (HS) and its relations with the youth news agency, Youth Voice Editorial Board (NÄT). In that sense, the study can be regarded as an explanatory embedded single case study, where HS is the principal unit of analysis and NÄT its embedded unit of analysis. The thesis reached its explanations through interrelated steps. First, it determined the role of ICT in HS's sourcing practices. Then it mapped an overview of HS's sourcing relations and provided the context in which NÄT is located. Finally, it conceptualized institutional relational data between HS and NÄT for subsequent measurement through social network analysis. The data set was collected via qualitative interviews with online and offline editors of HS as well as with NÄT's personnel. The study concluded that ICT's interactivity and User Generated Content (UGC) are not sourcing tools as such but mechanisms used by HS for getting ideas that could turn into potential news stories. However, when it comes to visual communication, some exceptions were found.
The lack of official sources amid the demand for immediacy leads HS to rely on ICT-mediated interaction and UGC. ICT's input into the sourcing practice becomes more noticeable when interaction and UGC are well organized and coordinated into proper, innovative networks of alternative content collaboration. Currently, HS performs this sourcing practice via two projects that differ precisely in how they are coordinated. The first project, Omakaupunki, is coordinated internally by the Sanoma Group-owned media houses HS, Vartti and Metro. The second project is coordinated externally. The external alternative sourcing network, as it was labeled, consists of three actors: HS, NÄT (the professionals in charge) and the youth. This network is a balanced and complete triad in which the actors connect themselves through relations of feedback, recognition, creativity and filtering. However, as innovation is approached very reluctantly, this content collaboration remains a laboratory of experiments; a 'COLLABORATORY'.
Abstract:
The use of electroacoustic analogies suggests that a source of acoustical energy (such as an engine, compressor, blower, turbine, loudspeaker, etc.) can be characterized by an acoustic source pressure ps and internal source impedance Zs, analogous to the open-circuit voltage and internal impedance of an electrical source. The present paper shows analytically that the source characteristics evaluated by means of the indirect methods are independent of the loads selected; that is, the evaluated values of ps and Zs are unique, and that the results of the different methods (including the direct method) are identical. In addition, general relations have been derived here for the transfer of source characteristics from one station to another station across one or more acoustical elements, and also for combining several sources into a single equivalent source. Finally, all the conclusions are extended to the case of a uniformly moving medium, incorporating the convective as well as dissipative effects of the mean flow.
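The uniqueness claim can be sketched with the standard two-load indirect method (the notation here follows the usual electroacoustic analogy and is assumed rather than quoted from the paper). With the source modeled as (p_s, Z_s) driving a load Z_i, the measured load pressure follows a pressure-divider relation, and two load measurements determine the source:

```latex
p_i = \frac{p_s Z_i}{Z_s + Z_i}, \qquad i = 1, 2,
\qquad\Longrightarrow\qquad
Z_s = \frac{Z_1 Z_2\,(p_2 - p_1)}{p_1 Z_2 - p_2 Z_1},
\qquad
p_s = p_1\,\frac{Z_s + Z_1}{Z_1}.
```

Any further load pair yields the same (p_s, Z_s), which is the load-independence (uniqueness) asserted above.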
Abstract:
An inverse problem for the wave equation is a mathematical formulation of the problem of converting measurements of sound waves into information about the wave speed governing the propagation of the waves. This doctoral thesis extends the theory of inverse problems for the wave equation to cases with partial measurement data and also considers detection of discontinuous interfaces in the wave speed. A possible application of the theory is obstetric sonography, in which ultrasound measurements are transformed into an image of the fetus in its mother's uterus. The wave speed inside the body cannot be directly observed, but sound waves can be produced outside the body and their echoes from the body can be recorded. The present work contains five research articles. In the first and fifth articles we show that it is possible to determine the wave speed uniquely by using far-apart sound sources and receivers. This extends a previously known result which requires the sound waves to be produced and recorded in the same place. Our result is motivated by a possible application to reflection seismology, which seeks to create an image of the Earth's crust from recordings of echoes stimulated, for example, by explosions. For this purpose, the receivers typically cannot lie near the powerful sound sources. In the second article we present a sound source that allows us to recover many essential features of the wave speed from the echo produced by the source. Moreover, these features are known to determine the wave speed under certain geometric assumptions. Previously known results permitted the same features to be recovered only by sequential measurement of echoes produced by multiple different sources. The reduced number of measurements could increase the number of possible applications of acoustic probing. In the third and fourth articles we develop an acoustic probing method to locate discontinuous interfaces in the wave speed.
These interfaces typically correspond to boundaries between different materials, and their locations are of interest in many applications. There are many previous approaches to this problem, but none of them exploits sound sources varying freely in time. Our use of more variable sources could allow a more robust implementation of the probing.
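Schematically, the measurement setting described above can be written as follows (a standard formulation assumed here, not quoted from the thesis): the acoustic field u solves

```latex
\partial_t^2 u(t,x) - c(x)^2 \Delta u(t,x) = f(t,x), \qquad u|_{t<0} = 0,
```

and the inverse problem is to determine the wave speed c(x) from pairs (f, u|_R), where the sources f are supported on a source set S and the echoes are recorded on a receiver set R; the partial-data results concern the case where S and R are disjoint or far apart.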
Abstract:
The study attempts a reception-historical analysis of the Maccabean martyrs. The concept of reception has fundamentally to do with the re-use and interpretation of a text within new texts. In a religious tradition, certain elements become re-circulated, and their reception may thus reflect the development of that particular tradition. The Maccabean martyrs first appear in 2 Maccabees. In my study, it is the Maccabean martyr figures who count as the received text; the focus is shifted from the interrelations between texts onto how the figures have been exploited in early Christian and Rabbinic sources. I have divided my sources into two categories, and my analysis is in two parts. First, I analyze the reception of the Maccabean martyrs within Jewish and Christian historiographical sources, focusing on the role given to them in depictions of the Maccabean Revolt (Chapter 3). I conclude that, within Jewish historiography, the martyrs are given roles that vary between ultimate efficacy and a marginal position with regard to making a historical difference. In Christian historiographical sources, the martyrs' role grows in importance over time; however, only after a Christian cult of the Maccabean martyrs has been established do Christian historiographies consider them historically effective. After the first part, I move on to analyze the reception in sources that use the Maccabean martyrs as paradigmatic figures (Chapter 4). I suggest that the martyrs are paradigmatic in the context of martyrdom, persecution and destruction, on the one hand, and in a homiletic context, inspiring religious celebration, on the other. I conclude that, as the figures are considered pre-Christian and biblical martyrs, they function well in terms of Christian martyrdom and have contributed to the development of its ideals.
Furthermore, the presentation of the martyr figures in Rabbinic sources demonstrates how the notion of Jewish martyrdom arises from experiences of destruction and despair, not so much from heroic confession of faith in the face of persecution. Before the emergence of a Christian cult of the Maccabean martyrs, their identity derives chiefly from their biblical position. Later on, in the homiletic context, their Jewish identity is debated and sometimes reconstructed as fundamentally 'Christian', despite their Jewish origins. No similar debate about their identity is found in the Rabbinic versions of their martyrdom, and nothing there indicates a mutual debate between early Christians and Jews. A thematic comparison shows that the Rabbinic and Christian cases of reception are not reliant on each other, but also that they link to one another. Especially the scriptural connections, often made to the Maccabean mother, reveal the similarities. The results of the analyses confirm that the early history of Christianity and Rabbinic Judaism share, at least partly, the same religious environment and intertwining traditions, not only during the first century or two but until Late Antiquity and beyond. Indeed, the reception of the Maccabean martyrs demonstrates that these religious traditions never ceased to influence one another.
Abstract:
The Shannon cipher system is studied in the context of general sources using a notion of computational secrecy introduced by Merhav and Arikan. Bounds are derived on limiting exponents of guessing moments for general sources. The bounds are shown to be tight for i.i.d., Markov, and unifilar sources, thus recovering some known results. A close relationship between error exponents and correct decoding exponents for fixed rate source compression on the one hand and exponents for guessing moments on the other hand is established.
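For orientation, the i.i.d. special case recovered here is Arikan's guessing exponent (stated from the standard literature; notation assumed): for an optimal guessing strategy G over source blocks X^n,

```latex
\lim_{n \to \infty} \frac{1}{n} \log \mathbb{E}\!\left[ G(X^n)^{\rho} \right]
= \rho \, H_{\frac{1}{1+\rho}}(X), \qquad \rho > 0,
```

where H_α(X) denotes the Rényi entropy of order α = 1/(1+ρ).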
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, in which a narrow-band signal source has been considered and Gaussian noise sources, which produce a spatially correlated background noise, have been distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared to the evaluation of the covariance function, there are more cross terms that do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
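As a minimal sketch of the covariance-based standard MUSIC that serves as the baseline above (the array geometry, grid resolution, and noise level here are illustrative assumptions, not the simulation setup of the study):

```python
import numpy as np

def music_doa(X, n_sources, n_grid=181):
    """Standard MUSIC: sample covariance -> noise subspace -> pseudospectrum."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]      # sample covariance matrix
    _, V = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = V[:, :M - n_sources]            # noise-subspace eigenvectors
    angles = np.linspace(-90.0, 90.0, n_grid)
    spectrum = np.empty(n_grid)
    for i, th in enumerate(angles):
        # steering vector of a half-wavelength-spaced uniform linear array
        a = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(th)))
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, spectrum

# One narrow-band source at 20 degrees, 8 sensors, 200 snapshots.
rng = np.random.default_rng(0)
M, N, theta = 8, 200, 20.0
a = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(theta)))
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise
angles, spec = music_doa(X, n_sources=1)
doa_estimate = angles[np.argmax(spec)]
```

Replacing the sample covariance with a fourth-order cumulant estimate yields c-MUSIC; the abstract's point is that the cumulant estimate needs far more snapshots before its extra cross terms average out.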
Abstract:
Analytical and numerical solutions have been obtained for some moving boundary problems associated with Joule heating and distributed absorption of oxygen in tissues. Several questions have been examined concerning solutions of the classical sharp-melting-front formulation and the classical enthalpy formulation, in which solid, liquid and mushy regions are present. Thermal properties and heat sources in the solid and liquid regions have been taken as unequal. The short-time analytical solutions presented here provide useful information. An effective numerical scheme has been proposed which is accurate and simple.
Abstract:
The ~2500 km-long Himalaya plate boundary experienced three great earthquakes during the past century, but none of them generated any surface rupture. The segments between the 1905-1934 and the 1897-1950 sources, known as the central and Assam seismic gaps respectively, have long been considered to hold potential for future great earthquakes. This paper addresses two issues concerning earthquakes along the Himalaya plate boundary: first, the absence of surface rupture associated with the great earthquakes, vis-a-vis the purported large slip observed in paleoseismological investigations; and second, the current understanding of the status of the seismic gaps in the Central Himalaya and Assam, in view of the paleoseismological and historical data being gathered. We suggest that ruptures of earthquakes nucleating on the basal detachment are likely to be restricted by the crustal ramps and thus generate no surface ruptures, whereas those originating on faults within the wedges promote upward propagation of rupture and displacement, as observed during the 2005 Kashmir earthquake, which showed a peak offset of 7 m. The occasional reactivation of these thrust systems within the duplex zone may also be responsible for the observed temporal and spatial clustering of earthquakes in the Himalaya. Observations presented in this paper suggest that the last major earthquake in the Central Himalaya occurred during AD 1119-1292, rather than in 1505 as suggested in some previous studies, and thus the gap in plate boundary events is real. As for the Northeastern Himalaya, seismically generated sedimentary features identified in the 1950 source region are generally younger than AD 1400, and evidence for older events is sketchy. The 1897 Shillong earthquake is not a decollement event, and its predecessor is probably ~1000 years old.
Compared to the Central Himalaya, the Assam Gap is a corridor of low seismicity between two tectonically independent seismogenic source zones, and cannot be considered a seismic gap in the conventional sense.
Abstract:
We present multifrequency Very Large Array (VLA) observations of two giant quasars, 0437-244 and 1025-229, from the Molonglo Complete Sample. These sources have well-defined FR II radio structure, possible one-sided jets, no significant depolarization between 1365 and 4935 MHz, and low rotation measure (|RM| < 20 rad m^-2). Giant sources are defined as those with overall projected size greater than or equal to 1 Mpc. We have compiled a sample of about 50 known giant radio sources from the literature and compared some of their properties with a complete sample of 3CR radio sources of smaller sizes, to investigate the evolution of giant sources and test their consistency with the unified scheme for radio galaxies and quasars. We find an inverse correlation between the degree of core prominence and total radio luminosity, and show that the giant radio sources have core strengths similar to those of smaller sources of similar total luminosity. Hence their large sizes are unlikely to be caused by stronger nuclear activity. The degree of collinearity of the giant sources is also similar to that of the sample of smaller sources. The luminosity-size diagram shows that the giant sources are less luminous than our sample of smaller-sized 3CR sources, consistent with evolutionary scenarios in which the giants have evolved from the smaller sources, losing energy as they expand to these large dimensions. For the smaller sources, radiative losses resulting from synchrotron radiation are more significant, while for the giant sources the equipartition magnetic fields are smaller and inverse Compton loss owing to the microwave background radiation is the dominant process. The radio properties of the giant radio galaxies and quasars are consistent with the unified scheme.
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is interested only in computing a function of several sources, one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source with a homomorphic encoder. Finally, we present certain non-homomorphic encoders which are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
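A classical data point behind the motivation above is the Körner-Marton modulo-two-sum example (cited here from the standard literature, not from this paper): for a doubly symmetric binary source (X, Y) with Z = X ⊕ Y and p = Pr(X ≠ Y), identical linear (hence homomorphic) encoders achieve

```latex
R_1 = R_2 = H(Z) = h(p),
\qquad\text{versus}\qquad
R_1 + R_2 \ge H(X, Y) = 1 + h(p)
```

required by Slepian-Wolf for full recovery of both sources, where h is the binary entropy function; computing only the function can thus be strictly cheaper than reconstructing the sources.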
Abstract:
Road transportation, an essential requirement of modern society, is presently hindered by restrictions in emissions legislation as well as the availability of petroleum fuels and, as a consequence, fuel cost. For nearly 270 years we have burned our fossil cache and have come to within a generation of exhausting the liquid part of it. Besides, to reduce greenhouse gases and to obey the environmental laws of most countries, it will be necessary to replace a significant number of petroleum-fueled internal-combustion-engine vehicles (ICEVs) with electric cars in the near future. In this article, we briefly describe the merits and demerits of various proposed electrochemical systems for electric cars, namely storage batteries, fuel cells and electrochemical supercapacitors, and determine the power and energy requirements of a modern car. We conclude that a viable electric car could be operated with a 50 kW polymer-electrolyte fuel cell stack to provide power for cruising and climbing, coupled in parallel with a 30 kW supercapacitor and/or battery bank to deliver additional short-term burst power during acceleration.
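The 50 kW + 30 kW split above can be checked with a back-of-envelope road-load calculation; all vehicle parameters below (mass, drag, rolling resistance, acceleration time) are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope check of a 50 kW cruise/climb + 30 kW burst power split.
G = 9.81             # gravitational acceleration, m/s^2
mass = 1200.0        # vehicle mass, kg (assumed compact car)
c_rr = 0.01          # rolling-resistance coefficient (assumed)
rho, cd, area = 1.2, 0.30, 2.0   # air density, drag coefficient, frontal area

def road_load_kw(v, grade=0.0):
    """Power (kW) needed to hold speed v (m/s) on a given road grade."""
    force = c_rr * mass * G + 0.5 * rho * cd * area * v**2 + mass * G * grade
    return force * v / 1e3

v = 100 / 3.6                           # cruise speed: 100 km/h in m/s
p_cruise = road_load_kw(v)              # flat-road cruising power
p_climb = road_load_kw(v, grade=0.05)   # 5% climb at the same speed
# Average extra power for 0-100 km/h in 12 s (kinetic energy / time):
p_accel = 0.5 * mass * v**2 / 12.0 / 1e3 + p_cruise
```

With these assumptions, cruising (~11 kW) and a 5% climb (~27 kW) sit comfortably within a 50 kW stack, while acceleration (~50 kW average, with higher instantaneous peaks) motivates the extra 30 kW burst source.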
Abstract:
The specified range of free chlorine residual (between minimum and maximum) in water distribution systems needs to be maintained to avoid deterioration of the microbial quality of water, control taste and/or odor problems, and hinder the formation of carcinogenic disinfection by-products. Multiple water quality sources providing chlorine input are needed to maintain chlorine residuals within a specified range throughout the distribution system. Determining the source dosage (i.e., chlorine concentrations/chlorine mass rates) at water quality sources to satisfy this objective under dynamic conditions is a complex process. A nonlinear optimization problem is formulated to determine the chlorine dosage at the water quality sources subject to minimum and maximum constraints on chlorine concentrations at all monitoring nodes. A genetic algorithm (GA) approach, in which decision variables (chlorine dosages) are coded as binary strings, is used to solve this highly nonlinear optimization problem, with nonlinearities arising from set-point sources and non-first-order reactions. Application of the model is illustrated using three sample water distribution systems, and it indicates that the GA is a useful tool for evaluating optimal water quality source chlorine schedules.
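A toy sketch of the binary-string GA formulation described above; the network response matrix, residual bounds, and GA parameters are all illustrative assumptions (a real application would use a hydraulic/water-quality simulator to map dosages to nodal concentrations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear response: residual concentrations at 4 monitoring
# nodes resulting from chlorine dosages at 2 sources (mg/L per mg/L dosed).
A = np.array([[0.9, 0.1],
              [0.6, 0.3],
              [0.3, 0.6],
              [0.1, 0.9]])
c_min, c_max = 0.2, 0.5      # residual bounds at monitoring nodes, mg/L
BITS, D_MAX = 8, 1.0         # bits per decision variable, max dosage mg/L

def decode(bits):
    """Binary string -> vector of source dosages in [0, D_MAX]."""
    vals = [int("".join(map(str, bits[i:i + BITS])), 2)
            for i in range(0, len(bits), BITS)]
    return D_MAX * np.array(vals) / (2 ** BITS - 1)

def fitness(bits):
    """Minimize total dosage; penalize residuals outside [c_min, c_max]."""
    d = decode(bits)
    c = A @ d
    violation = np.sum(np.maximum(c_min - c, 0) + np.maximum(c - c_max, 0))
    return -(d.sum() + 10.0 * violation)

pop = rng.integers(0, 2, (40, 2 * BITS))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    # binary tournament selection
    idx = rng.integers(0, len(pop), (len(pop), 2))
    winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
    parents = pop[winners]
    # one-point crossover on consecutive pairs, then bit-flip mutation
    children = parents.copy()
    for i in range(0, len(children) - 1, 2):
        cut = rng.integers(1, children.shape[1])
        children[i, cut:] = parents[i + 1, cut:]
        children[i + 1, cut:] = parents[i, cut:]
    children ^= (rng.random(children.shape) < 0.01).astype(children.dtype)
    pop = children

best = max(pop, key=fitness)
dose = decode(best)
conc = A @ dose
```

The binary encoding and penalty-based fitness mirror the formulation in the abstract: constraint handling is folded into the objective, so standard GA operators need no modification.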