Abstract:
This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment. The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided. The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN --- the Community Seismic Network --- which uses relatively low-cost sensors deployed by members of the community, and (2) SAF --- the Situation Awareness Framework --- which integrates data from multiple sources, including the CSN, CISN --- the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California --- and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.
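The CSN architecture described above connects low-cost community sensors to cloud aggregation services. A minimal sketch of the client side is given below; the endpoint shape, threshold value, and message fields are all hypothetical illustrations, not the actual CSN protocol.

```python
# Hypothetical sketch of a community sensor client: detect a local
# acceleration "pick" and build a report for a cloud aggregation service.
# Threshold and message fields are illustrative, not the actual CSN design.
import json
import time

PICK_THRESHOLD_G = 0.02   # illustrative detection threshold, in g


def detect_pick(samples, threshold=PICK_THRESHOLD_G):
    """Return the index of the first sample whose |acceleration| exceeds threshold."""
    for i, a in enumerate(samples):
        if abs(a) > threshold:
            return i
    return None


def make_report(sensor_id, lat, lon, samples, t0, rate_hz=50):
    """Build a compact JSON pick report; return None if no pick is detected."""
    i = detect_pick(samples)
    if i is None:
        return None
    return json.dumps({
        "sensor": sensor_id,
        "lat": lat,
        "lon": lon,
        "pick_time": t0 + i / rate_hz,           # time of first threshold crossing
        "peak_g": max(abs(a) for a in samples),  # peak acceleration in the window
    })


samples = [0.001, 0.003, 0.05, 0.12, 0.04]
print(make_report("csn-0042", 34.14, -118.12, samples, t0=time.time()))
```

In a real deployment the report would be posted to the cloud service, which fuses picks from many sensors before declaring an event; the sketch stops at message construction.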
Abstract:
This thesis focuses on improving the simulation skills and the theoretical understanding of the subtropical low cloud response to climate change.
First, an energetically consistent forcing framework is designed and implemented for the large eddy simulation (LES) of the low-cloud response to climate change. The three representative current-day subtropical low cloud regimes of cumulus (Cu), cumulus-over-stratocumulus, and stratocumulus (Sc) are all well simulated with this framework, and results are comparable to the conventional fixed-SST approach. However, the cumulus response to climate warming subject to energetic constraints differs significantly from the conventional approach with fixed SST. Under the energetic constraint, the subtropics warm less than the tropics, since longwave (LW) cooling is more efficient with the drier subtropical free troposphere. The surface latent heat flux (LHF) also increases only weakly subject to the surface energetic constraint. Both factors contribute to an increased estimated inversion strength (EIS), and decreased inversion height. The decreased Cu-depth contributes to a decrease of liquid water path (LWP) and weak positive cloud feedback. The conventional fixed-SST approach instead simulates a strong increase in LHF and deepening of the Cu layer, leading to a weakly negative cloud feedback. This illustrates the importance of energetic constraints to the simulation and understanding of the sign and magnitude of low-cloud feedback.
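The estimated inversion strength (EIS) invoked above can be sketched numerically. The snippet below follows the standard decomposition EIS = LTS − Γ_m(850 hPa)·(z700 − LCL), with LTS the lower-tropospheric stability; the thermodynamic constants are standard, but all input values and the mid-layer temperature estimate are illustrative assumptions, not results from the thesis.

```python
# Hedged sketch: estimated inversion strength (EIS) from lower-tropospheric
# stability and a moist-adiabatic correction. Inputs are illustrative.
import math

G = 9.81      # gravity (m s^-2)
CP = 1004.0   # specific heat of dry air (J kg^-1 K^-1)
RD = 287.0    # dry-air gas constant (J kg^-1 K^-1)
RV = 461.0    # water-vapor gas constant (J kg^-1 K^-1)
LV = 2.5e6    # latent heat of vaporization (J kg^-1)


def moist_lapse_rate(T, p):
    """Moist-adiabatic lapse rate (K m^-1) at temperature T (K), pressure p (Pa)."""
    # Saturation vapor pressure from a simple Clausius-Clapeyron fit (Pa)
    es = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))
    qs = 0.622 * es / (p - es)  # saturation mixing ratio
    num = 1.0 + LV * qs / (RD * T)
    den = 1.0 + LV**2 * qs / (CP * RV * T**2)
    return (G / CP) * num / den


def eis(T0, theta700, z700, lcl, T850=None, p850=85000.0):
    """EIS = LTS - Gamma_m(850 hPa) * (z700 - LCL), all heights in meters."""
    lts = theta700 - T0               # lower-tropospheric stability (K)
    if T850 is None:
        T850 = 0.5 * (T0 + theta700)  # crude mid-layer temperature estimate
    return lts - moist_lapse_rate(T850, p850) * (z700 - lcl)


# Illustrative subtropical values: with theta700 set by the (moist-adiabatic)
# tropics, weaker subtropical surface warming raises LTS and hence EIS.
print(eis(T0=290.0, theta700=308.0, z700=3100.0, lcl=600.0))
```

This is why, under the energetic constraint, muted subtropical surface warming relative to the tropics translates directly into a larger EIS and a lower inversion.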
Second, an extended eddy-diffusivity mass-flux (EDMF) closure for the unified representation of sub-grid scale (SGS) turbulence and convection processes in general circulation models (GCM) is presented. The inclusion of prognostic terms and the elimination of the infinitesimal updraft fraction assumption makes it more flexible for implementation in models across different scales. This framework can be consistently extended to formulate multiple updrafts and downdrafts, as well as variances and covariances. It has been verified with LES in different boundary layer regimes in the current climate, and further development and implementation of this closure may help to improve our simulation skills and understanding of low-cloud feedback through GCMs.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created to facilitate model creation through the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. Both the productivity of the researcher and the level of confidence in the model being analyzed are greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
To increase confidence in STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free-vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package more capable of highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free-vibration analyses for the three-story one-bay chevron-brace frame, the two-bay chevron-brace frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], where the structure was analyzed in all three software packages and their results compared. The pushover curves from each package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis tool failed to converge after the onset of inelastic behavior. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
TV Maxambomba: Processos de Singularização is the result of an investigation into the potential that resides in audiovisual language, above all in the process of popular video production and communication, appropriated by people who, in their differences, use the language and technology of video as a tool for expressing their culture, their reality, their creation and inventiveness. Tracing the trajectory of TV Maxambomba, this research brought out the dimension of potency involved in the articulation of people and groups who use audiovisual technology and the language of video in their creative process as a mechanism for producing knowledge and subjectivation. Over its 15 years, TV Maxambomba has revealed itself as a potential laboratory of media invention, democratizing audiovisual language and making it possible, in a media era, to inaugurate the post-media era. Transgressing television norms and formats, tracing its lines of flight, bringing out the peculiarities of the communities and territories occupied by TV Maxambomba, and territorializing and deterritorializing its own language, it reveals itself as a space for producing processes of singularization.
Abstract:
Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, small and medium-sized enterprises are expected to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if its characteristics and capabilities have been widely discussed, entry into the cloud still lacks practical, real-world frameworks. This paper aims to fill that gap by presenting a real tool, already implemented and tested, which can be used as a cloud computing adoption decision tool. The tool uses a diagnosis based on specific questions to gather the required information and subsequently provides the user with valuable information for deploying the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge of cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest in and low knowledge of this subject, and the tool presented aims to redress this mismatch insofar as possible. Copyright: © 2015 Bildosola et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
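The diagnosis step described above, in which specific questions feed a recommendation per business area, can be sketched as a weighted questionnaire. The question texts, areas, and weights below are invented for illustration and are not taken from the Bildosola et al. tool.

```python
# Hypothetical sketch of a questionnaire-driven diagnosis: weighted yes/no
# questions aggregate into a per-business-area cloud readiness score.
from dataclasses import dataclass


@dataclass
class Question:
    text: str
    area: str      # business area the question probes (invented labels)
    weight: float  # relative importance of the question (invented values)


QUESTIONS = [
    Question("Is your customer data already stored digitally?", "CRM", 2.0),
    Question("Do staff need remote access to accounting records?", "Accounting", 1.5),
    Question("Is upfront IT investment a barrier for your firm?", "General", 1.0),
]


def readiness_by_area(answers):
    """Aggregate yes/no answers into a per-area score in [0, 1]."""
    totals, scores = {}, {}
    for q, yes in zip(QUESTIONS, answers):
        totals[q.area] = totals.get(q.area, 0.0) + q.weight
        scores[q.area] = scores.get(q.area, 0.0) + (q.weight if yes else 0.0)
    return {area: scores[area] / totals[area] for area in totals}


print(readiness_by_area([True, False, True]))
```

Areas scoring high would then be matched against a catalog of SaaS solutions, forming the "Cloud Road" the paper describes; that matching step is omitted here.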
Abstract:
Non-conventional forms of advertising that TV networks and advertisers have come up with in order to tackle the proliferation of media and the viewer's defection from the TV experience.
Abstract:
Certain situations in communication practices draw attention, whether for the technical possibilities that arise with each new tool created or for their undeniable influence on human and social relations. At a moment when the academic community is called upon to think about a new television, this work sets out to go into the field and observe what is available: uses, formats and languages in a phase of transition. We therefore propose an empirical analysis set in a space of media reception, using as a methodological tool the Principle of Symmetry based on Actor-Network Theory. Following the idea of tracing connections through ethnographic writing, the method proposes thinking of the social less as a foundational analytical category, posited in advance and detached from the field of action, and more as something focused on continuous process. Social and media practices are considered here beyond strictly human, or exclusively technical, limits. The actor-network is established not as a fixed entity but through flows, from which the method helps describe the propagation of associations. The discussion of technical possibilities and content production has barely begun. Perhaps for this reason it finds itself at the moment of transcending the field of normative concepts and speculation, so as to follow media instances as networks of actants invented and reinvented every day, under the conditions of possibility of practical moments of use. It fell to this research to observe these procedures more closely, along with the relationship between changing technologies, audiences, materialities, spaces, bodies, sensations and emotions, in order to identify some understanding of what may characterize television, at least provisionally.
Abstract:
The content rating of television programming (classificação indicativa) recently generated a major debate in Brazil. Through a survey of newspapers from January 2007 to April 2008, this dissertation aims to present and understand the various positions, the ideological clash around ideas such as freedom and democracy, and the interests behind this dispute. While representatives of social movements and non-governmental organizations defending children's rights and the right to communication pushed for regulation, the broadcasting industry mounted strong resistance.
Abstract:
This study analyzes televised presidential debates as persuasive campaign events. The objective was to contribute to the understanding not only of the role of this source of political information in the Brazilian context, but also to discuss its possible effects in a systematic way. TV debates are a short-term communicational variable of electoral processes. They offer communicational stimuli that are disseminated in the campaign environment, whether by those who watch them directly or by those who learn about these events and the candidates' performances through other channels, such as the press and the free electoral advertising slot (Horário da Propaganda Gratuita Eleitoral, HPGE). Since information alone is not enough to explain changes of opinion, we focused the study on two main axes. The first is identifying and mapping the persuasive strategies adopted by the candidates, because they are urged to confront their opponents in a live event through which voters can assess not only their political positioning but also how they present themselves. At stake here is an impact on voters' attitudes toward the competitors. The main results indicate a pattern in the purpose of the messages: in the aggregate, attack prevails among opposition candidates and acclaim among incumbent-party candidates. The candidate's positioning, as well as the political content of the messages, showed significant results for a possible effect on voters' attitudes. In the study we also propose an analysis of the frames adopted by the competitors, whose function is to establish a frame of reference for the audience. This variable, which seeks to take into account aspects of verbal and non-verbal communication, also showed significant results. The second analytical axis deals with the aggregate effects of these campaign events.
We analyzed the debates of 2002, when a climate of opinion favorable to the opposition prevailed, and of 2010, when the climate favored the incumbents. Regarding the impact of the debates on the informational environment, the data suggest that in 2002 the performance of Luiz Inácio Lula da Silva (PT), the opposition candidate, led to broader positive press coverage of the candidate, while such coverage declined for José Serra (PSDB), the incumbent-party candidate. In 2010, press coverage after the debates was balanced between the incumbent-party candidate, Dilma Rousseff (PT), and the opposition candidate, José Serra. The impact on the campaign's informational environment was accompanied by an increase in aggregate vote intention for the candidates leading the polls, who represented change in 2002 (Lula) or continuity in 2010 (Dilma). In both elections, therefore, TV debates in Brazil proved to be important persuasive events, despite playing a less central role as a device of electoral information and not causing competitors to swap positions in the opinion polls. But they contribute, at least indirectly, to consolidating and expanding vote intentions for the front-runners, based on a widely disseminated positive perception of their performances.
Abstract:
Cloud chambers were essential devices in early nuclear and particle physics research. Although superseded by more modern detectors in current research, they remain very interesting pedagogical apparatuses. This thesis attempts to give a global view of the topic. To do so, it reviews the physical foundations of the diffusion cloud chamber, in which an alcohol is supersaturated by cooling it with a thermal reservoir. The main results are then applied to analyze the working conditions inside the chamber. The analysis highlights the importance of using an appropriate alcohol, such as isopropanol, as well as a strong cooling system, which for isopropanol needs to reach −40 °C. This theoretical study is complemented with experimental tests performed on the usual design of a home-made cloud chamber. An effective setup is established, highlighting details such as grazing illumination, direct contact with the cooling reservoir through a wide metal plate, and the importance of avoiding vapour removal. In addition, video results of the different phenomena that cloud chambers allow one to observe are presented. Overall, the aim is to provide the physical insight that pedagogical papers usually lack.
Abstract:
The surge of Internet traffic, with exabytes of data flowing over operators' mobile networks, has created the need to rethink the paradigms behind the design of the mobile network architecture. The inadequacy of 4G UMTS Long Term Evolution (LTE), and even of its advanced version LTE-A, is evident, considering that traffic in the near future will be extremely heterogeneous, ranging from 4K-resolution TV to machine-type communications. To keep up with these changes, academia, industry and EU institutions have now engaged in the quest for new 5G technology. In this paper we present the innovative system design, concepts and visions developed by the 5G PPP H2020 project SESAME (Small cEllS coordinAtion for Multi-tenancy and Edge services). The innovation of SESAME is manifold: i) combining the key 5G small cells with cloud technology, ii) promoting and developing the concept of Small-Cells-as-a-Service (SCaaS), iii) bringing computing and storage power to the mobile network edge through the development of non-x86 ARM-technology-enabled micro-servers, and iv) addressing a large number of scenarios and use cases applying mobile edge computing.