820 results for Context information
Abstract:
The literature reports research efforts that enable end-users to edit interactive TV multimedia documents. In this article we propose complementary contributions relative to end-user generated interactive video, video tagging, and collaboration. In earlier work we proposed the watch-and-comment (WaC) paradigm as the seamless capture of an individual's comments so that corresponding annotated interactive videos can be automatically generated. As a proof of concept, we implemented a prototype application, the WACTOOL, that supports the capture of digital ink and voice comments over individual frames and segments of the video, producing a declarative document that specifies both the structure of the different media streams and their synchronization. In this article, we extend the WaC paradigm in two ways. First, user-video interactions are associated with edit commands and digital ink operations. Second, focusing on collaboration and distribution issues, we employ annotations as simple containers for context information, using them as tags in order to organize, store and distribute information in a P2P-based multimedia capture platform. We highlight the design principles of the watch-and-comment paradigm and demonstrate related results, including the current version of the WACTOOL and its architecture. We also illustrate how an interactive video produced by the WACTOOL can be rendered in an interactive video environment, the Ginga-NCL player, and include results from a preliminary evaluation.
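A schematic sketch (structure and element names assumed, not actual WACTOOL output) of the kind of declarative description the abstract mentions: media items plus synchronization links anchoring a voice comment and an ink overlay to a video segment. Real WaC documents are NCL, which the Ginga-NCL player renders.

```python
# Hypothetical sketch of a declarative annotated-video document:
# media streams plus a synchronization link tying comments to a segment.
import xml.etree.ElementTree as ET

doc = ET.Element("document")
media = ET.SubElement(doc, "media")
ET.SubElement(media, "video", id="lecture", src="lecture.mp4")
ET.SubElement(media, "audio", id="voiceComment", src="comment01.ogg")
ET.SubElement(media, "ink", id="inkOverlay", src="ink01.svg")

sync = ET.SubElement(doc, "synchronization")
# Play the voice comment and show the ink overlay while the annotated
# segment (seconds 42-57 of the video, values assumed) is on screen.
ET.SubElement(sync, "link", trigger="lecture:42s-57s",
              start="voiceComment inkOverlay")

print(ET.tostring(doc, encoding="unicode"))
```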
Resumo:
Research objectives Poker and responsible gambling both entail the use of the executive functions (EF), which are higher-level cognitive abilities. The main objective of this work was to assess if online poker players of different ability show different performances in their EF and if so, which functions are the most discriminating ones. The secondary objective was to assess if the EF performance can predict the quality of gambling, according to the Gambling Related Cognition Scale (GRCS), the South Oaks Gambling Screen (SOGS) and the Problem Gambling Severity Index (PGSI). Sample and methods The study design consisted of two stages: 46 Italian active players (41m, 5f; age 32±7,1ys; education 14,8±3ys) fulfilled the PGSI in a secure IT web system and uploaded their own hand history files, which were anonymized and then evaluated by two poker experts. 36 of these players (31m, 5f; age 33±7,3ys; education 15±3ys) accepted to take part in the second stage: the administration of an extensive neuropsychological test battery by a blinded trained professional. To answer the main research question we collected all final and intermediate scores of the EF tests on each player together with the scoring on the playing ability. To answer the secondary research question, we referred to GRCS, PGSI and SOGS scores. We determined which variables that are good predictors of the playing ability score using statistical techniques able to deal with many regressors and few observations (LASSO, best subset algorithms and CART). In this context information criteria and cross-validation errors play a key role for the selection of the relevant regressors, while significance testing and goodness-of-fit measures can lead to wrong conclusions. Preliminary findings We found significant predictors of the poker ability score in various tests. In particular, there are good predictors 1) in some Wisconsin Card Sorting Test items that measure flexibility in choosing strategy of problem-solving, strategic planning, modulating impulsive responding, goal setting and self-monitoring, 2) in those Cognitive Estimates Test variables related to deductive reasoning, problem solving, development of an appropriate strategy and self-monitoring, 3) in the Emotional Quotient Inventory Short (EQ-i:S) Stress Management score, composed by the Stress Tolerance and Impulse Control scores, and in the Interpersonal score (Empathy, Social Responsibility, Interpersonal Relationship). As for the quality of gambling, some EQ-i:S scales scores provide the best predictors: General Mood for the PGSI; Intrapersonal (Self-Regard; Emotional Self-Awareness, Assertiveness, Independence, Self-Actualization) and Adaptability (Reality Testing, Flexibility, Problem Solving) for the SOGS, Adaptability for the GRCS. Implications for the field Through PokerMapper we gathered knowledge and evaluated the feasibility of the construction of short tasks/card games in online poker environments for profiling users’ executive functions. These card games will be part of an IT system able to dynamically profile EF and provide players with a feedback on their expected performance and ability to gamble responsibly in that particular moment. The implementation of such system in existing gambling platforms could lead to an effective proactive tool for supporting responsible gambling.
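A minimal illustration (not the study's code; data are synthetic and dimensions assumed) of the n << p selection strategy the abstract describes: LASSO with the penalty chosen by cross-validated prediction error rather than by significance testing.

```python
# Illustrative sketch: selecting predictors of a "poker ability" score via
# LASSO when observations are few (n=36) and regressors are many, with
# cross-validation choosing the penalty strength.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_players, n_tests = 36, 60                  # hypothetical dimensions
X = rng.normal(size=(n_players, n_tests))    # standardized EF test scores
beta = np.zeros(n_tests)
beta[:3] = [1.5, -1.0, 0.8]                  # few true predictors (synthetic)
y = X @ beta + rng.normal(scale=0.5, size=n_players)  # ability score

# LassoCV picks alpha by cross-validated error, the appropriate criterion
# here, since significance tests are unreliable with n << p.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected regressors:", selected, "alpha:", model.alpha_)
```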
Abstract:
Information and information technology are two topics of great current interest, in what some call the Information Age. In this setting, information technology (the combination of computers and telecommunications) has been developing at an accelerated pace, offering organizations a range of opportunities for its use: automating tasks, from the most routine to the most complex, and generating information for operations and for decision-making. The changes in how work is performed, a consequence of the use of this technology, affect several aspects of organizations, most notably their human resources. Practice has indeed shown that successful use of the technology does not depend on technical aspects alone. Several studies indicate that an alignment, a harmonious integration, among elements such as information, technology and human resources is essential, along with others such as organizational culture and management. To this end, it is important that those involved, managers and technology specialists, have a broadened understanding of these elements and of their relationships. A first step toward this understanding can be found in the analysis of existing Organization Theories. A deeper study of information, in the context of the communication process, reveals it to be a complex concept, related to knowledge, to human beings and to information technology. In turn, the relationship between technology in general and the organization is described from several theoretical points of view, which point to the link between the design stage of a technology and its use, and to the role of human beings in this process. The recognition of the close interdependence between technology and the organization leads to proposals of models that make a successful technology implementation feasible. Based on these proposals, and on knowledge of each specific element involved, new concepts emerge, such as hypertechnology and hyperorganization, which can ease the treatment of the related issues within organizations.
Abstract:
The popularization of the Internet has stimulated the appearance of search engines whose objective is to aid users in the process of finding information on the Web. However, it is common for users to submit queries and receive results that do not satisfy their initial needs. The Information Retrieval in Context (IRiX) approach allows information related to a specific theme to be associated with the user's initial query, thereby enabling better results. This study presents a prototype of a search engine based on contexts built from linguistic gatherings and on relationships defined by the user. The context information can be shared with software and with other users of the tool, with the objective of promoting the socialization of contexts.
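A toy sketch of the general context-expansion idea behind such engines (the vocabulary and context map below are invented, not the prototype's data): terms associated with the user's active context are added to the query before it reaches the underlying search engine.

```python
# Hypothetical context map: query term -> context -> related expansion terms.
CONTEXTS = {
    "jaguar": {
        "automotive": ["car", "dealer"],
        "wildlife": ["animal", "habitat"],
    }
}

def expand(query: str, active_context: str) -> str:
    """Augment the query with terms from the user's active context."""
    extra = CONTEXTS.get(query, {}).get(active_context, [])
    return " ".join([query, *extra])

print(expand("jaguar", "wildlife"))   # -> "jaguar animal habitat"
```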
Abstract:
Context-aware applications are typically dynamic and use services provided by several sources, with different quality levels. The quality of context information is expressed in terms of Quality of Context (QoC) metadata, such as precision, correctness, freshness, and resolution. Service quality, in turn, is expressed via Quality of Service (QoS) metadata such as response time, availability and error rate. In order to ensure that an application is using services and context information that meet its requirements, it is essential to continuously monitor this metadata. For this purpose, a QoS and QoC monitoring mechanism is needed that meets the following requirements: (i) support measurement and monitoring of QoS and QoC metadata; (ii) support synchronous and asynchronous operation, thus enabling the application both to periodically gather the monitored metadata and to be asynchronously notified whenever a given metadatum becomes available; (iii) use ontologies to represent information in order to avoid ambiguous interpretation. This work presents QoMonitor, a module for QoS and QoC metadata monitoring that meets the above requirements. The architecture and implementation of QoMonitor are discussed. To support asynchronous communication QoMonitor uses two protocols: JMS and Light-PubSubHubbub. To illustrate QoMonitor in the development of ubiquitous applications, it was integrated into OpenCOPI (Open COntext Platform Integration), a middleware platform that integrates several context-provision middleware systems. To validate QoMonitor we used two applications as proof of concept: an oil and gas monitoring application and a healthcare application. This work also presents a validation of QoMonitor in terms of performance for both synchronous and asynchronous requests.
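A minimal sketch of the two access styles required of such a monitor. This is not QoMonitor's actual API (which is JMS/Light-PubSubHubbub based); it only illustrates the synchronous-pull versus asynchronous-push contract described in requirement (ii).

```python
# Hypothetical metadata monitor with synchronous and asynchronous access.
from collections import defaultdict

class MetadataMonitor:
    def __init__(self):
        self._values = {}                      # latest metadata by name
        self._subscribers = defaultdict(list)  # name -> callbacks

    def publish(self, name, value):
        """Called by measurement probes when a metadatum is (re)computed."""
        self._values[name] = value
        for cb in self._subscribers[name]:     # asynchronous push
            cb(name, value)

    def get(self, name):
        """Synchronous pull: the application polls the last monitored value."""
        return self._values.get(name)

    def subscribe(self, name, callback):
        """Asynchronous: notify the application whenever `name` changes."""
        self._subscribers[name].append(callback)

mon = MetadataMonitor()
mon.subscribe("qos.response_time", lambda n, v: print("notified:", n, v))
mon.publish("qos.response_time", 0.120)        # a probe pushes a measurement
print("polled:", mon.get("qos.response_time"))
```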
Abstract:
Pervasive applications rely on context-provision middleware as the infrastructure that supplies context information. Typically, these applications use publish/subscribe communication to eliminate direct coupling between components and to allow selective information dissemination based on the interests of the communicating parties. Composite event mechanisms are used together with such middleware to aggregate individual low-level events, originating from heterogeneous sources, into high-level context information relevant to the application. CES (Composite Event System) is a composite event mechanism that works in cooperation with several context-provision middleware systems simultaneously. With this integration, applications use CES to subscribe to composite events; CES, in turn, subscribes to the primitive events in the appropriate underlying middleware and notifies the applications when the composite events occur. Furthermore, CES offers a language with a set of operators for the definition of composite events that also allows context information sharing.
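An illustrative sketch of the composite-event idea (class and event names are hypothetical; CES's actual operator language is richer): primitive events fed in by subscriptions to the underlying middleware trigger a notification only when a composition, here a simple conjunction within a time window, is satisfied.

```python
# Toy conjunction ("AND within a window") composite-event detector.
import time

class CompositeAnd:
    def __init__(self, event_names, window, on_composite):
        self.needed = set(event_names)
        self.window = window               # seconds
        self.seen = {}                     # primitive name -> timestamp
        self.on_composite = on_composite

    def on_primitive(self, name, timestamp=None):
        """Fed by subscriptions to the underlying context middleware."""
        now = timestamp if timestamp is not None else time.time()
        self.seen[name] = now
        # Drop stale primitives that fell outside the window.
        self.seen = {n: t for n, t in self.seen.items()
                     if now - t <= self.window}
        if self.needed <= self.seen.keys():
            self.on_composite(dict(self.seen))   # notify the application
            self.seen.clear()

ces = CompositeAnd({"door.open", "room.empty"}, window=5.0,
                   on_composite=lambda evs: print("composite fired:", evs))
ces.on_primitive("door.open")
ces.on_primitive("room.empty")   # both within 5 s -> composite fires
```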
Abstract:
The increasing use of mobile devices and wireless communication technologies has improved access to web information systems. However, the development of these systems imposes new challenges, mainly due to the heterogeneity of mobile devices, the management of context information, and the complexity of the adaptation process. In particular, these systems should be able to run on a great number of mobile device models. In this article, we describe a context-aware architecture that provides solutions to the above challenges for the development of education administration systems. Copyright 2009 ACM.
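A toy sketch (device profiles and fields invented) of the adaptation decision such an architecture must make: choosing a content variant from the device's context rather than serving one fixed page.

```python
# Hypothetical device profiles driving content adaptation.
DEVICE_PROFILES = {
    "feature_phone": {"markup": "xhtml-mp", "images": False},
    "smartphone":    {"markup": "html5",    "images": True},
}

def adapt_page(device: str, page: dict) -> dict:
    """Return a variant of `page` matching the device's capabilities."""
    profile = DEVICE_PROFILES[device]
    variant = {"markup": profile["markup"], "content": page["text"]}
    if profile["images"]:
        variant["images"] = page["images"]   # only capable devices get media
    return variant

page = {"text": "Grades for semester 2009/1", "images": ["chart.png"]}
print(adapt_page("feature_phone", page))
print(adapt_page("smartphone", page))
```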
Abstract:
Observability measures the support of computer systems for accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, which compromises their utility for today's increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program executions. Existing such solutions, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies in the existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable/disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful by having access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for the dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
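A hypothetical Python sketch of the three client-facing capabilities listed above: custom events, precise trigger locations, and runtime enable/disable. MTF itself works inside a language VM; this only mirrors the API shape an analysis client might see, with a decorator standing in for a "program location".

```python
# Toy marker-tracing API: define events, attach them to locations,
# and toggle instrumentation adaptively at runtime.
import functools

class MarkerTracer:
    def __init__(self):
        self.enabled = set()     # event names currently active
        self.handlers = {}       # event name -> analysis callback

    def define_event(self, name, handler):
        self.handlers[name] = handler
        self.enabled.add(name)

    def set_enabled(self, name, on):
        """Adaptive instrumentation: toggle an event at runtime."""
        (self.enabled.add if on else self.enabled.discard)(name)

    def marker(self, name):
        """Attach event `name` to a program location (here: function entry)."""
        def deco(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if name in self.enabled:
                    self.handlers[name](fn.__name__, args)  # event context
                return fn(*args, **kwargs)
            return wrapper
        return deco

mtf = MarkerTracer()
mtf.define_event("call", lambda where, args: print("event at", where, args))

@mtf.marker("call")
def compute(x):
    return x * 2

compute(21)                      # traced
mtf.set_enabled("call", False)   # analysis decides tracing is no longer needed
compute(21)                      # not traced
```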
Abstract:
During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on applying the GrabCut technique over a trixel mesh, obtaining very promising results for a close-to-real-time system. Finally, the clothing features are combined with facial and head context information to outperform previous results in gender recognition on a public database.
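A minimal sketch of the GrabCut step only, using plain OpenCV on pixels; the paper applies the technique over a trixel mesh with face/head context, and the image path and rectangle below are assumed.

```python
# Plain-pixel GrabCut: seed with a rough clothing rectangle, keep the
# pixels labeled (probably) foreground as the clothes segment.
import numpy as np
import cv2

img = cv2.imread("person.jpg")              # hypothetical input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)         # GrabCut background model
fgd = np.zeros((1, 65), np.float64)         # GrabCut foreground model

# Rough clothing region, e.g. below a detected face (coordinates assumed).
rect = (50, 120, 200, 260)                  # x, y, w, h
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Definite and probable foreground pixels form the clothes segment.
clothes = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
segment = img * clothes[:, :, None].astype(np.uint8)
cv2.imwrite("clothes_segment.png", segment)
```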
Abstract:
The full exploitation of the multi-hop, multi-path connectivity opportunities offered by heterogeneous wireless interfaces could enable innovative Always Best Served (ABS) deployment scenarios where mobile clients dynamically self-organize to offer/exploit Internet connectivity in the best possible way. Only novel middleware solutions based on heterogeneous context information can seamlessly enable this scenario: middleware should i) provide translucent access to low-level components, to achieve both fully aware and simplified pre-configured interactions; ii) make it possible to fully exploit communication interface capabilities, i.e., not only obtaining but also providing connectivity in a peer-to-peer fashion, thus relieving end users and application developers of the burden of directly managing the heterogeneity of wireless interfaces; and iii) treat user mobility as crucial context information, evaluating at provision time the suitability of available Internet points of access differently depending on whether the mobile client is stationary or in motion. The novelty of this research work resides in three primary points. First, it proposes a novel model and taxonomy providing a common vocabulary to easily describe and position solutions in the area of context-aware autonomic management of preferred network opportunities. Second, it presents PoSIM, a context-aware middleware for the synergic exploitation and control of heterogeneous positioning systems that facilitates the development and portability of location-based services. PoSIM is translucent, i.e., it can provide application developers with differentiated visibility of the data characteristics and control possibilities of the available positioning solutions, thus dynamically adapting to application-specific deployment requirements and enabling cross-layer management decisions. Finally, it provides the MMHC solution for the self-organization of multi-hop, multi-path heterogeneous connectivity. MMHC considers a limited set of practical indicators of node mobility and wireless network characteristics for a coarse-grained estimation of the expected reliability/quality of the multi-hop paths available at runtime. In particular, MMHC manages the durability/throughput-aware formation and selection of different multi-hop paths simultaneously. Furthermore, MMHC provides a novel solution based on adaptive buffers, proactively managed based on handover prediction, to support continuous services, especially by pre-fetching multimedia contents to avoid streaming interruptions.
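A coarse-grained sketch of the MMHC-style estimation idea (indicator names, the mobility penalty, and the scoring rule are all assumed, not taken from the thesis): a few practical per-hop indicators yield an expected-reliability score for each candidate multi-hop path, penalizing hops through nodes in motion.

```python
# Toy path scoring from per-hop indicators; a path is only as reliable
# as the product of its hops.
def path_score(hops):
    """hops: list of dicts with normalized signal quality and mobility flag."""
    score = 1.0
    for hop in hops:
        reliability = hop["signal"]          # normalized 0..1
        if hop["in_motion"]:
            reliability *= 0.5               # assumed mobility penalty
        score *= reliability                 # path fails if any hop fails
    return score

paths = {
    "via_A_B": [{"signal": 0.9, "in_motion": False},
                {"signal": 0.8, "in_motion": True}],
    "via_C":   [{"signal": 0.7, "in_motion": False}],
}
best = max(paths, key=lambda p: path_score(paths[p]))
print(best, {p: round(path_score(h), 2) for p, h in paths.items()})
```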
Abstract:
Momentary brain electric field configurations are manifestations of momentary global functional states of the brain. Field configurations tend to persist over some time in the sub-second range ("microstates") and concentrate within a few classes of configurations. Accordingly, brain field data can be reduced efficiently into sequences of re-occurring classes of brain microstates that do not overlap in time. Different configurations must have been caused by different active neural ensembles, and thus different microstates assumedly implement different functions. The question arises whether aberrant schizophrenic mentation is associated with specific changes in the repertory of microstates. Continuous sequences of brain electric field maps (multichannel resting EEG data) from 9 neuroleptic-naive, first-episode, acute schizophrenics and from 18 matched controls were analyzed. The map series were assigned to four individual microstate classes; these were tested for differences between groups. One microstate class displayed significantly different field configurations and shorter durations in patients than in controls; the degree of shortening correlated with the severity of paranoid symptomatology. The three other microstate classes showed no group differences related to psychopathology. Schizophrenic thinking apparently is not a continuous bias in brain functions, but consists of intermittent occurrences of inappropriate brain microstates that open access to inadequate processing strategies and context information.
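A schematic illustration, with synthetic data, of the reduction step described above: each momentary field map is assigned to the best-matching of four class templates by absolute spatial correlation, and microstate durations are then read off as run lengths of identical labels. The actual microstate analysis pipeline differs in detail.

```python
# Synthetic microstate assignment: label each map, then measure run lengths.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 19, 1000
templates = rng.normal(size=(4, n_channels))      # four class maps (synthetic)
maps = rng.normal(size=(n_samples, n_channels))   # momentary EEG field maps

def spatial_corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

labels = np.array([np.argmax([abs(spatial_corr(m, t)) for t in templates])
                   for m in maps])

# Microstate durations = lengths of runs of identical labels.
changes = np.flatnonzero(np.r_[True, labels[1:] != labels[:-1], True])
runs = np.diff(changes)
print("mean microstate duration (samples):", runs.mean())
```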
Abstract:
Context: Information currently available on the trafficking of minors in the U.S. for commercial sexual exploitation includes approximations of the numbers involved, risk factors that increase the likelihood of victimization, and methods of recruitment and control. However, specific characteristics of this vulnerable population remain largely unknown. Objective: This article has two distinct purposes. The first is to provide the reader with an overview of available information on minor sex trafficking in the U.S. The second is to present findings and discuss policy, research, and educational implications from a secondary data analysis of 115 cases of minor sex trafficking in the U.S. Design: Minor sex trafficking cases were identified through two main venues: a review of U.S. Department of Justice press releases on human trafficking cases and an online search of media reports. Searches covered the period from October 28, 2000, which coincided with the passage of the VTVPA, through October 31, 2009. Cases were included in the analysis if the incident involved at least one victim under the age of 18, occurred in the U.S., and at least one perpetrator had been arrested, indicted, or convicted. Results: A total of 115 separate incidents involving at least 153 victims were located. These incidents involved 215 perpetrators, the majority of whom had been convicted (n = 117, 53.4%). The number of victims involved in a single incident ranged from 1 to 9. Over 90% of victims were female, ranging in age from 5 to 17 years. There were more U.S. minor victims than victims from other countries. Victims had been in captivity from less than 6 months to 5 years. Minors most commonly fell into exploitation through some type of false promise (16.3%, n = 25), followed by kidnapping (9.8%, n = 15). Over a fifth of the sample (22.2%, n = 34) were abused through two commercial sex practices, with almost all victims (94.1%, n = 144) used in prostitution. Nearly one in four victims (24.8%, n = 38) had been advertised on an Internet website. Conclusions: Results of a review of known information about minor sex trafficking and findings from the analysis of 115 incidents of the sex trafficking of youth in the U.S. indicate a need for stronger legislation, education of various professional groups, more comprehensive services for victims, stricter laws for pimps and traffickers, and preventive educational interventions beginning at a young age.
Abstract:
User experience when watching live video must remain satisfactory even under the influence of different network conditions and topology changes, such as those occurring in Flying Ad-Hoc Networks (FANETs). Routing services for video dissemination over FANETs must be able to adapt routing decisions at runtime to meet Quality of Experience (QoE) requirements. In this paper, we introduce an adaptive beaconless opportunistic routing protocol for video dissemination over FANETs with QoE support, which takes into account multiple types of context information, such as link quality, residual energy, and buffer state, as well as geographic information and node mobility in 3D space. The proposed protocol uses Bayesian networks to define weight vectors and the Analytic Hierarchy Process (AHP) to adjust the degree of importance of each piece of context information based on instantaneous values. It also includes position prediction to monitor the distance between two nodes in order to detect possible route failures.
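An illustrative sketch of the AHP weighting step only (the context factors and pairwise judgments below are assumed, not the paper's values): a pairwise-comparison matrix over the context types named above is reduced to a weight vector via its principal eigenvector, as AHP prescribes.

```python
# AHP: pairwise comparisons -> normalized importance weights.
import numpy as np

factors = ["link_quality", "residual_energy", "buffer_state", "geo_progress"]
# A[i][j] = how much more important factor i is than factor j (Saaty scale).
A = np.array([[1.0, 3.0, 5.0, 2.0],
              [1/3, 1.0, 3.0, 1/2],
              [1/5, 1/3, 1.0, 1/4],
              [1/2, 2.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                        # normalized AHP weight vector
for f, wi in zip(factors, w):
    print(f"{f}: {wi:.3f}")

# A candidate relay's utility would then be the weighted sum of its
# normalized context readings, e.g. utility = w @ readings.
```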
Abstract:
This paper presents a methodological reflection on the use of summated rating (Likert) scales in the evaluation of teaching performance in the university context. It reviews the background of the technical prescriptions for this type of scaling, as well as a set of observations concerning the pertinence of its application for evaluative purposes and its limitations as a tool for generating knowledge. It concludes that the Likert scale can be used in evaluative contexts, provided that the set of requirements tied to its application and to its analytic-interpretive treatment is observed and that its insurmountable problems are acknowledged, so as to weigh and qualify the construction of the numerical datum. In this way, the paper makes explicit a critique of the "quantophrenic" and "artefactual" character that accompanies its application, and that, contradictorily, is inscribed in a discourse situating teaching evaluation within quality policies in higher education.