452 results for Offline
Abstract:
The performance of three urban land surface models, run in offline mode with their default external parameters, is evaluated for two distinctly different sites in Helsinki: Torni and Kumpula. The former is a dense city centre site with 22% vegetation, while the latter is a suburban site with over 50% vegetation. At both locations the models are compared against sensible and latent heat fluxes measured using the eddy covariance technique, along with snow depth observations. The cold climate experienced by the city causes strong seasonal variations that include snow cover and stable atmospheric conditions. Most of the time the three models are able to account for the differences between the study areas as well as for the seasonal and diurnal variability of the energy balance components. However, their performance is not consistent across the modelled components, seasons and surface types. The net all-wave radiation is well simulated, with the greatest uncertainties related to snowmelt timing, when the fraction of snow cover plays a key role, particularly in determining the surface albedo. For the turbulent fluxes there is more variation between the models, which can be explained partly by the different methods used in their calculation and partly by the surface parameter values. For the sensible heat flux, the main problem is the simulation of wintertime values, which also leads to issues in predicting near-surface stability, particularly at the dense city centre site. All models have the greatest difficulty in simulating the latent heat flux. This study particularly emphasizes that improvements are needed in the parameterization of the anthropogenic heat flux and thermal parameters in winter, of snow cover in spring, and of evapotranspiration, in order to improve surface energy balance modelling in cold-climate cities.
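For context, the quantity these urban land surface models must close is the urban surface energy balance, conventionally written as below; this is the standard textbook formulation rather than an equation quoted from the abstract:

```latex
Q^{*} + Q_F = Q_H + Q_E + \Delta Q_S
```

where Q* is the net all-wave radiation, Q_F the anthropogenic heat flux, Q_H the turbulent sensible heat flux, Q_E the turbulent latent heat flux, and ΔQ_S the net storage heat flux. An error in any one term, such as an underestimated wintertime Q_F, propagates directly into the modelled Q_H and Q_E.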
Abstract:
Background: Studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI based on covert attention and feature attention has been proposed, called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and achieved high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm; consequently, the performance of the BCI was reduced. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. The idea is that different colors and facial expressions help users locate the target and evoke larger event-related potentials (ERPs). To evaluate the performance of this new paradigm, two other paradigms were also presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key question determining the value of colored dummy face stimuli in BCI systems was whether they could yield higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) took part in our experiment. Online and offline results of the different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes than the gray dummy face pattern and the colored ball pattern. Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combine color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this yields a significant advantage in the evoked P300 and N400 amplitudes and results in high classification accuracies and information transfer rates.
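The information transfer rate compared in such online results is conventionally computed with the Wolpaw formula, which depends only on the number of classes, the classification accuracy, and the time per selection. A minimal sketch in Python; the class count and trial duration in the example call are illustrative assumptions, not the study's parameters:

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Bits per minute under the Wolpaw model, which assumes equiprobable
    targets and uniformly distributed errors."""
    p = accuracy
    if p >= 1.0:
        bits = math.log2(n_classes)
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / trial_seconds

# Hypothetical example: a 12-class RSVP speller, 85% accuracy, 10 s/selection.
print(f"{wolpaw_itr(12, 0.85, 10.0):.2f} bits/min")
```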
Abstract:
Temperature, pressure, gas stoichiometry, and residence time were varied to control the yield and product distribution of the palladium-catalyzed aminocarbonylation of aromatic bromides in both a silicon microreactor and a packed-bed tubular reactor. Automation of the system set points and of product sampling enabled facile and repeatable reaction analysis with minimal operator supervision. The reaction was observed to divide into two temperature regimes. An automated system was used to screen steady-state conditions for offline analysis by gas chromatography in order to fit a reaction rate model. Additionally, a transient temperature-ramp method utilizing online infrared analysis was used, leading to more rapid determination of the reaction activation energy of the lower temperature regime. The entire reaction spanning both regimes was modeled in good agreement with the experimental data.
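Activation energies like the one determined here are typically extracted from a linearized Arrhenius fit, ln k = ln A - Ea/(RT), over rate constants measured at several temperatures. A minimal sketch; the (T, k) values below are purely illustrative, not data from the paper:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Illustrative temperature / rate-constant pairs (not data from the paper).
T = np.array([333.0, 343.0, 353.0, 363.0])   # K
k = np.array([0.012, 0.025, 0.049, 0.091])   # 1/s

# Linearized Arrhenius model: ln k = ln A - Ea/(R*T).
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R        # activation energy, J/mol
A = np.exp(intercept)  # pre-exponential factor, 1/s

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g} 1/s")
```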
Abstract:
I consider the case for genuinely anonymous web searching. Big data seems to have it in for privacy. The story is well known, particularly since the dawn of the web. Vastly more personal information, monumental and quotidian, is gathered than in the pre-digital days. Once gathered it can be aggregated and analyzed to produce rich portraits, which in turn permit unnerving prediction of our future behavior. The new information can then be shared widely, limiting prospects and threatening autonomy. How should we respond? Following Nissenbaum (2011) and Brunton and Nissenbaum (2011 and 2013), I will argue that the proposed solutions—consent, anonymity as conventionally practiced, corporate best practices, and law—fail to protect us against routine surveillance of our online behavior. Brunton and Nissenbaum rightly maintain that, given the power imbalance between data holders and data subjects, obfuscation of one’s online activities is justified. Obfuscation works by generating “misleading, false, or ambiguous data with the intention of confusing an adversary or simply adding to the time or cost of separating good data from bad,” thus decreasing the value of the data collected (Brunton and Nissenbaum, 2011). The phenomenon is as old as the hills. Natural selection evidently blundered upon the tactic long ago. Take a savory butterfly whose markings mimic those of a toxic cousin. From the point of view of a would-be predator the data conveyed by the pattern is ambiguous. Is the bug lunch or potential last meal? In the light of the steep costs of a mistake, the savvy predator goes hungry. Online obfuscation works similarly, attempting for instance to disguise the surfer’s identity (Tor) or the nature of her queries (Howe and Nissenbaum 2009). Yet online obfuscation comes with significant social costs. First, it implies free riding. If I’ve installed an effective obfuscating program, I’m enjoying the benefits of an apparently free internet without paying the costs of surveillance, which are shifted entirely onto non-obfuscators. Second, it permits sketchy actors, from child pornographers to fraudsters, to operate with near impunity. Third, online merchants could plausibly claim that, when we shop online, surveillance is the price we pay for convenience. If we don’t like it, we should take our business to the local brick-and-mortar and pay with cash. Brunton and Nissenbaum have not fully addressed the last two costs. Nevertheless, I think the strict defender of online anonymity can meet these objections. Regarding the third, the future doesn’t bode well for offline shopping. Consider music and books. Intrepid shoppers can still find most of what they want in a book or record store. Soon, though, this will probably not be the case. And then there are those who, for perfectly good reasons, are sensitive about doing some of their shopping in person, perhaps because of their weight or sexual tastes. I argue that consumers should not have to pay the price of surveillance every time they want to buy that catchy new hit, that New York Times bestseller, or a sex toy.
Abstract:
A current topic in Swedish schools is the use of computer games and gaming. One reason is that computers are becoming more and more integrated into schools, and the technology plays a large role in the everyday lives of the pupils. Since teachers should integrate pupils' interests into formal teaching, it is of interest to know what attitudes teachers have towards gaming. Therefore, the aim of this empirical study is to gain insight into the attitudes Swedish primary teachers have towards online and offline computer games in the EFL classroom. An additional aim is to investigate to what extent teachers use games. Five interviews were conducted with teachers in different Swedish schools in a small to medium-sized municipality. After the interviews were transcribed, the results were analyzed and discussed in relation to relevant research and sociocultural theory. The results show that teachers are positive towards games and gaming, mostly because gaming often involves interaction with others, and learning from peers is a main component of sociocultural theory. However, only one of the five participants had at some point used games. The conclusion is that teachers are unsure about how to use games in their teaching and that training and courses in this area would be valuable. More research is needed in this area; it would be of value to investigate what such courses should contain, and also exactly how games can be used in teaching.
Abstract:
The number of research papers available today is growing at a staggering rate, generating more information than people can keep up with. According to a trend indicated by the United States' National Science Foundation, more than 10 million new papers will be published in the next 20 years. Because most of these papers will be available on the Web, this research focuses on issues in recommending research papers to users, in order to lead users directly to papers of their interest. Recommender systems recommend items to users from among a huge stream of available items, according to users' interests. This research focuses on the two most prevalent techniques to date, namely Content-Based Filtering and Collaborative Filtering. The first explores the text of the paper itself, recommending items similar in content to the ones the user has rated in the past. The second explores the citation web existing among papers. As these two techniques have complementary advantages, we explored hybrid approaches to recommending research papers. We created standalone and hybrid versions of the algorithms and evaluated them through both offline experiments on a database of 102,295 papers and an online experiment with 110 users. Our results show that the two techniques can be successfully combined to recommend papers, and that coverage increases to 100% with the hybrid algorithms. In addition, we found that different algorithms are more suitable for recommending different kinds of papers. Finally, we verified that users' research experience influences the way they perceive recommendations, while we found no significant differences in recommending papers to users from different countries. However, our results showed that users interacting with a research paper recommender system are much happier when the interface is presented in their native language, regardless of the language in which the papers are written. An interface should therefore be tailored to the user's mother tongue.
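A common way to combine the two techniques the thesis evaluates is a weighted hybrid, where a content-based similarity computed from the papers' text is blended with a collaborative signal derived from the citation web. A minimal sketch; the toy corpus, the citation matrix, and the 50/50 weight are illustrative assumptions, not the thesis's algorithms:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus and citation links -- purely illustrative.
papers = [
    "hybrid recommender systems for research papers",
    "collaborative filtering over citation graphs",
    "content based text retrieval with tf-idf weighting",
]
citations = np.array([   # citations[i, j] = 1 if paper i cites paper j
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
])

# Content-based signal: cosine similarity of tf-idf vectors of the text.
tfidf = TfidfVectorizer().fit_transform(papers)
content_sim = cosine_similarity(tfidf)

# Collaborative signal: shared references (bibliographic coupling),
# normalized to [0, 1].
coupling = citations @ citations.T
collab_sim = coupling / max(coupling.max(), 1)

# Weighted hybrid of the two complementary signals.
alpha = 0.5
hybrid_sim = alpha * content_sim + (1 - alpha) * collab_sim
print(hybrid_sim.round(2))
```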
Abstract:
Computer clusters are widely used to obtain high performance in the execution of parallel applications. Their use has grown significantly over the years, and today they account for almost 60% of the 500 fastest machines in the world. Although clusters are widespread, monitoring the resources of these machines is considered a complex task. This complexity stems from the fact that many different software and hardware configurations can be characterized as a cluster. Because of these different configurations, a cluster administrator often needs more than one monitoring tool to gather enough information to make decisions about problems that may be occurring in the cluster. Another situation that illustrates the complexity of the monitoring task arises when a developer of parallel applications needs information about the execution environment in order to better understand the application's behavior. The execution of parallel applications in multi-cluster and grid environments, together with the need for information external to the application, is yet another situation that requires monitoring. In all of these situations there are multiple independent data sources whose information may be related or complementary. The goal of this work is to propose a data integration model that can adapt to different information sources and produce, as a result, integrated information suitable for joint visualization by some tool. The model is based on the offline debugging of parallel applications and is divided into two stages: data collection and the subsequent integration of the information. A prototype based on this integration model is described in this work. The prototype uses as information sources the cluster monitoring tools Ganglia and Performance Co-Pilot, the DECK and MPI application tracing libraries, and an instrumentation of the Linux operating system that records the context switches of a set of processes. Pajé is the tool chosen for the integrated visualization of the information. The results of the prototype's data integration process fall into three categories: debugging of DECK applications, debugging of MPI applications, and cluster monitoring. The text closes with the conclusions and contributions of this work, as well as some suggestions for future work.
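The integration stage described above boils down to normalizing events from independent, already time-sorted sources and merging them into a single timeline that a visualization tool can consume. A minimal sketch of that idea in Python; the event fields and source names are illustrative, not the prototype's actual trace format:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    timestamp: float                       # only field used for ordering
    source: str = field(compare=False)     # e.g. a monitor or a trace library
    payload: dict = field(compare=False)

def merge_traces(*sources):
    """Merge per-source, time-sorted event lists into one global timeline."""
    return list(heapq.merge(*sources))

# Hypothetical samples from a cluster monitor and an application trace.
monitor = [Event(0.5, "monitor", {"cpu": 0.7}), Event(1.5, "monitor", {"cpu": 0.9})]
trace = [Event(0.2, "mpi", {"call": "MPI_Send"}), Event(1.1, "mpi", {"call": "MPI_Recv"})]

for ev in merge_traces(monitor, trace):
    print(f"{ev.timestamp:6.2f}  {ev.source:8s}  {ev.payload}")
```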
Abstract:
This thesis develops and evaluates a business model for connected full electric vehicles (FEV) for the European market. Despite a supportive political environment, various barriers have so far prevented the FEV from becoming a mass-market vehicle. Besides cost, the most noteworthy of these barriers is range anxiety, a product of FEVs' limited range, the lacking availability of charging infrastructure, and long recharging times. Connected FEVs, which maintain a constant connection to the surrounding infrastructure, appear to be a promising means of overcoming drivers' range anxiety. Yet their successful application requires a well-functioning FEV ecosystem, which can only be created through the collaboration of various stakeholders such as original equipment manufacturers (OEM), first-tier suppliers (FTS), charging infrastructure and service providers (CISP), utilities, communication enablers, and governments. This thesis explores and evaluates what a business model jointly created by these stakeholders could look like, i.e. how stakeholders could collaborate in the design of products, services, infrastructure, and advanced mobility management to present drivers with a sensible value proposition that is at least equivalent to that of internal combustion engine (ICE) cars. It suggests that this value proposition will be an end-to-end package provided by CISPs or OEMs that comprises mobility packages (incl. pay-per-mile plans, battery leasing, charging and battery swapping (BS) infrastructure) and FEVs equipped with an on-board unit (OBU), combined with additional services targeted at reducing range anxiety. From a theoretical point of view, the thesis answers the question of which business model framework is suitable for developing a holistic business model for connected FEVs, i.e. one comprising all stakeholders, and defines such a business model. In doing so, the thesis provides the first comprehensive business-model-related research findings on connected FEVs, as prior works focused on the much less complex scenario featuring only “offline” FEVs.
Abstract:
The impact of climate variation has been a widely researched topic in the world macroeconomy and in sectors such as agriculture, energy and insurance. For the retail sector, however, a search of the main Brazilian journals returned no specific study. In more developed economies, weather-linked insurance products are widely traded, and through this work we also assess the possibility of developing such a market in Brazil. The present work evaluated the impact of climate variation on retail sales over a period of approximately 18 months (564 days) for 253 Brazilian cities. The weather data (precipitation, temperature, wind speed, relative humidity, sunshine and atmospheric pressure) were obtained from INMET (Instituto Nacional de Meteorologia) and crossed with the transactional records of up to 206 thousand active clients, in an unbalanced sample, from a financial institution in the credit card business. Both databases have daily periodicity. The econometric methodology was a fixed-effects panel data model for longitudinal data, estimated with the statistics/econometrics packages EViews (proprietary software from IHS) and R (free software). The hypothesis tested was that weather influences customers' purchasing decisions in the short term, and it was supported by the analyses. Assuming that retail consumer behavior does not change with the choice of payment method, rain has a negative impact on retail sales in local currency. The explanation lies in a reduction in the total number of transactions rather than in the average transaction value. Excluding the cities of São Paulo and Rio de Janeiro from the database did not change the significance or relevance of the results. Rain, on the other hand, has a substitution effect between online and offline sales. When economic sectors were analyzed to check for different behavior between consumption and purchases, no change in the results was observed. When demographic variables were included, we concluded that women and older customers have a longer purchase history. Evaluating the impact of rain on a given day over the following 6 to 29 days, we observed that it is significant for the number of transactions, but its impact on sales volume was not significant.
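The fixed-effects panel specification described above can be sketched as follows; the synthetic data and variable names are illustrative assumptions rather than the thesis's actual data, and the estimation is shown with Python's statsmodels instead of EViews or R:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic daily city panel: sales depend on a city-specific level plus rain.
rng = np.random.default_rng(0)
n_cities, n_days = 5, 200
df = pd.DataFrame({
    "city": np.repeat([f"c{i}" for i in range(n_cities)], n_days),
    "rain_mm": rng.gamma(1.0, 4.0, n_cities * n_days),
})
city_level = np.repeat(rng.normal(100.0, 10.0, n_cities), n_days)
df["sales"] = city_level - 0.8 * df["rain_mm"] + rng.normal(0.0, 5.0, len(df))

# C(city) adds city dummies (fixed effects), so the rain coefficient is
# identified from within-city variation; standard errors clustered by city.
result = smf.ols("sales ~ rain_mm + C(city)", data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["city"].astype("category").cat.codes},
)
print(result.params["rain_mm"])  # should recover roughly -0.8
```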
Abstract:
Online retail has been posting ever larger sales volumes. This growing volume of business, transacted in a virtual environment, makes customer loyalty a greater challenge than in transactions carried out in physical stores. In Brazil, the drivers of online retail growth are found mainly among low-income consumers, the so-called class C, yet little is known about this segment in online retail. This research set out to evaluate the phenomenon of e-Loyalty among Brazilian consumers from both income strata (upper and lower) in two stages: an exploratory qualitative stage and a quantitative stage. In the exploratory stage, 16 consumers were interviewed and a content analysis was performed. In the quantitative stage, data collected from 1,020 consumers via web survey were analyzed using structural equation modeling (maximum likelihood estimator). The investigation was guided by what a literature review identified as the "classic e-Loyalty model", in which e-Loyalty is built from the relations among the dimensions of e-Quality, e-Satisfaction and e-Trust. This research evaluated these relations in the Brazilian market across different consumer income strata, motivated by the growing participation of class C consumers in Brazilian online commerce and by indications of distinct behavior between the strata. Among the main results, consumers in the upper income strata were found to place more value on the quality of the website's information, whereas consumers in the lower income strata stress the importance of guaranteed delivery. For consumers in both the upper and the lower strata, e-Satisfaction carries much more weight than e-Trust in determining e-Loyalty, and this dominance of e-Satisfaction over e-Trust is even more pronounced in the lower income segment. Despite its smaller weight, e-Trust is more relevant in the upper segment than in the lower one. These differences attest that income moderates the relations that define e-Loyalty, and we hope this contributes to the study of the phenomenon.
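In structural-equation terms, the "classic e-Loyalty model" the thesis starts from can be rendered as a system like the one below, with income acting as a moderator of the structural coefficients; the path structure and notation are an illustrative reconstruction, not the thesis's exact specification:

```latex
\begin{aligned}
\text{e-Satisfaction} &= \gamma_{1}\,\text{e-Quality} + \zeta_{1} \\
\text{e-Trust}        &= \gamma_{2}\,\text{e-Quality} + \zeta_{2} \\
\text{e-Loyalty}      &= \beta_{1}\,\text{e-Satisfaction} + \beta_{2}\,\text{e-Trust} + \zeta_{3}
\end{aligned}
```

Under this rendering, the reported results correspond to beta_1 far exceeding beta_2 in both strata, with the gap wider in the lower-income segment and beta_2 larger in the upper-income segment.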
Abstract:
The aim of this Master's thesis has been to shed light on the response strategies that organizations implement when facing a crisis created on, or amplified by, social media. Since the development of social media in the late 1990s, the interplay between the online and offline spheres has become more complex and characterized by dynamics of a new magnitude, as exemplified by the wave of “Twitter” revolutions or the Wikileaks scandal in the mid 2000s, where online behaviors deeply affected an offline reality. The corporate world does not escape this worldwide phenomenon, and there are more and more examples of organizational reputations destroyed by social media “fireballs”. This research therefore investigates, through the analysis of six recent cases of corporate crises (2013-2015) from France and Brazil, different strategies currently in use, in order to identify examples of good and bad practices for companies to adopt or avoid when facing a social media crisis. The first part of this research is dedicated to a review of the literature on crisis management and social media. From that review, we designed a matrix model, the Social Media Crisis Management Matrix, with which we analyzed the response strategies of the six companies selected. This model conceptualizes social media crises in a multidimensional matrix built to allow the choice, according to four parameters, of the most efficient response strategy, that is, the one that will limit the reputational damage. The attribution of responsibility for the crisis to the company by stakeholders, the origin of the crisis (internal or external), the degree of reputational threat, and the emotions conveyed online by stakeholders help companies determine whether to adopt a defensive or an accommodative response. The results of the analysis suggest that social media crises are rather Manichean objects, for they are, unlike their traditional offline counterparts, characterized by emotional involvement and irrationality, and cannot be dealt with traditionally. Analyzing the emotions of stakeholders proved to be, in these cases, an accurate thermometer of the seriousness of the crisis and, as such, a better rudder to follow when selecting a response strategy. Consequently, in these cases, companies minimized their reputational damage when responding to their stakeholders in an accommodative way, regardless of the “objective” situation, which may signal a change of paradigm in crisis management.
Abstract:
As digital systems move away from traditional desktop setups, new interaction paradigms are emerging that better integrate with users' real-world surroundings and better support users' individual needs. While promising, these modern interaction paradigms also present new challenges, such as a lack of paradigm-specific tools to systematically evaluate and fully understand their use. This dissertation tackles this issue by framing empirical studies of three novel digital systems in embodied cognition, an exciting new perspective in cognitive science where the body and its interactions with the physical world take a central role in human cognition. This is achieved, first, by focusing the design of all these systems on tangible interaction, a contemporary interaction paradigm that emphasizes physical interaction; and second, by comprehensively studying user performance in these systems through a set of novel performance metrics grounded in epistemic actions, a relatively well-established and well-studied construct in the literature on embodied cognition. The first system presented in this dissertation is an augmented Four-in-a-row board game. Three versions of the game were developed, based on three different interaction paradigms (tangible, touch and mouse), and a repeated-measures study involving 36 participants measured the occurrence of three simple epistemic actions across these three interfaces. The results highlight the relevance of epistemic actions in such a task and suggest that the different interaction paradigms afford the instantiation of these actions in different ways. Additionally, the tangible version of the system supports the most rapid execution of these actions, providing novel quantitative insights into the real benefits of tangible systems. The second system presented in this dissertation is a tangible tabletop scheduling application. Two studies with single and paired users provide several insights into the impact of epistemic actions on the user experience when these are performed outside a system's sensing boundaries. These insights concern the form, size and location of ideal interface areas for such offline epistemic actions to occur, as well as how physical tokens can be designed to better support them. Finally, building on the results obtained to this point, the last study presented in this dissertation directly addresses the lack of empirical tools to formally evaluate tangible interaction. It presents a video-coding framework grounded in a systematic literature review of 78 papers, and evaluates its value as a metric through a 60-participant study performed across three different research laboratories. The results highlight the usefulness and power of epistemic actions as a performance metric for tangible systems. In sum, through the use of such novel metrics in each of the three studies presented, this dissertation provides a better understanding of the real impact and benefits of designing and developing systems that feature tangible interaction.
Abstract:
During sleep, humans experience the offline images and sensations that we call dreams, which are typically emotional and lacking in rational judgment of their bizarreness. However, during lucid dreaming (LD), subjects know that they are dreaming and may control the oneiric content. Dreaming and LD features have been studied in North Americans, Europeans and Asians, but not among Brazilians, the largest population in Latin America. Here we investigated dream and LD characteristics in a Brazilian sample (n = 3,427; median age = 25 years) through an online survey. The subjects reported recalling dreams at least once a week (76%), and that dreams typically depicted actions (93%), known people (92%), sounds/voices (78%), and colored images (76%). The oneiric content was associated with plans for the upcoming days (37%), memories of the previous day (13%), or was unrelated to the dreamer (30%). Nightmares usually depicted anxiety/fear (65%), being stalked (48%), or other unpleasant sensations (47%). These data corroborate the Freudian notion of day residue in dreams, and suggest that dreams and nightmares are simulations of life situations related to our psychobiological integrity. Regarding LD, we observed that 77% of the subjects had experienced LD at least once in their life (44% up to 10 episodes ever), and for 48% of them an LD episode subjectively lasted less than 1 min. LD frequency correlated weakly with dream recall frequency (r = 0.20, p < 0.01), and LD control was rare (29%). LD occurrence was facilitated when subjects did not need to wake up early (38%), a situation that increases rapid eye movement sleep (REMS) duration, or when subjects were under stress (30%), which increases REMS transitions into waking. These results indicate that LD is relatively ubiquitous but rare, unstable, difficult to control, and facilitated by increases in REMS duration and in transitions to the waking state. Together with LD incidence in the USA, Europe and Asia, our data from Latin America strengthen the notion that LD is a general phenomenon of the human species.
Abstract:
Early psychiatry investigated dreams to understand psychopathologies. Contemporary psychiatry, which neglects dreams, has been criticized for a lack of objectivity. In search of quantitative insight into the structure of psychotic speech, we investigated speech graph attributes (SGA) in patients with schizophrenia, bipolar disorder type I, and non-psychotic controls as they reported waking and dream contents. Schizophrenic subjects spoke with reduced connectivity, in tight correlation with the negative and cognitive symptoms measured by standard psychometric scales. Bipolar and control subjects were indistinguishable by waking reports, but in dream reports bipolar subjects showed significantly less connectivity. Dream-related SGA outperformed psychometric scores and waking-related data for group sorting. Altogether, the results indicate that online and offline processing, the two most fundamental modes of brain operation, produce nearly opposite effects on recollections: while dreaming exposes differences in the mnemonic records across individuals, waking dampens distinctions. The results also demonstrate the feasibility of a differential diagnosis of psychosis based on the analysis of dream graphs, pointing to a fast, low-cost and language-invariant tool for psychiatric diagnosis and for the objective search for biomarkers. The Freudian notion that “dreams are the royal road to the unconscious” is clinically useful after all.
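A minimal sketch of the word-graph representation behind speech graph attributes (SGA): each word in a report becomes a node and each immediate word-to-word transition a directed edge, after which connectivity attributes are read off the graph. This illustrates the general technique with networkx; it is not the authors' exact pipeline, and the dream report is a hypothetical example:

```python
import networkx as nx

def speech_graph(report: str) -> nx.DiGraph:
    """Directed word graph: nodes are words, edges are word transitions."""
    words = report.lower().split()
    g = nx.DiGraph()
    g.add_edges_from(zip(words, words[1:]))
    return g

# Hypothetical dream report, for illustration only.
report = "i was in a house and the house was dark and i was afraid"
g = speech_graph(report)

lcc = max(nx.weakly_connected_components(g), key=len)    # largest connected component
lsc = max(nx.strongly_connected_components(g), key=len)  # largest strongly connected one
print(f"nodes={g.number_of_nodes()} edges={g.number_of_edges()} "
      f"LCC={len(lcc)} LSC={len(lsc)}")
```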