809 results for Multi-dimensional Numbered Information Spaces


Relevance:

100.00%

Publisher:

Abstract:

Marusia N. Slavchova-Bozhkova - In the present work, a limit theorem for a subcritical multi-dimensional age-dependent branching process with two types of immigration is generalized. The aim is to extend an analogous result from the one-dimensional case by applying the coupling method, renewal theory, and regenerative processes.

Relevance:

100.00%

Publisher:

Abstract:

The present study examines the extent to which blacks are segregated in the suburban community of Coconut Grove, Florida. Hypersegregation, or the general tendency for blacks and whites to live apart, was examined in terms of four distinct dimensions: evenness, exposure, clustering, and concentration. Together, these dimensions define the geographic traits of the target area. Alone, these indices cannot capture the multi-dimensional levels of segregation and therefore, by themselves, underestimate the severity of segregation and isolation in this community. This study takes a contemporary view of segregation in a Dade County community to see whether segregation is the catalyst for the sometimes-cited violent response of blacks. The study yields results that support the information in the literature review and the thesis research-questions section, namely that blacks within the Grove do respond violently to the negative effects of racial segregation. This thesis is unique in two ways: it examines segregation in a suburban environment rather than an urban inner city, and it presents a responsive analysis of the individuals studied rather than relying only on demographic and statistical data.
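Two of the four dimensions listed above have compact, standard index formulas in the segregation literature. The following sketch computes the dissimilarity (evenness) and isolation (exposure) indices from hypothetical tract-level counts; the data are illustrative and are not drawn from the thesis.

```python
# Illustrative sketch: dissimilarity (evenness) and isolation (exposure)
# indices over made-up census-tract counts, not the thesis data.
def dissimilarity(black, white):
    """Evenness: share of either group that would have to move for an even spread."""
    B, W = sum(black), sum(white)
    return 0.5 * sum(abs(b / B - w / W) for b, w in zip(black, white))

def isolation(black, total):
    """Exposure: probability that a randomly chosen black resident's neighbor is black."""
    B = sum(black)
    return sum((b / B) * (b / t) for b, t in zip(black, total))

# Hypothetical counts for five census tracts.
black = [900, 850, 40, 30, 20]
white = [100, 150, 960, 970, 980]
total = [b + w for b, w in zip(black, white)]

print(f"dissimilarity D = {dissimilarity(black, white):.2f}")   # near 1 = highly uneven
print(f"isolation xPx   = {isolation(black, total):.2f}")       # near 1 = highly isolated
```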

Relevance:

100.00%

Publisher:

Abstract:

The advent of smart TVs has reshaped the TV-consumer interaction by combining TVs with mobile-like applications and access to the Internet. However, consumers are still unable to interact seamlessly with the content being streamed. An example of this limitation is TV shopping, in which a consumer purchases a product or item displayed in the current TV show. Currently, consumers can only stop the show and attempt to find a similar item on the Web or in a physical store. It would be more convenient if the consumer could interact with the TV to purchase interesting items. Towards the realization of TV shopping, this dissertation proposes a scalable multimedia content processing framework. Two main challenges in TV shopping are addressed: the efficient detection of products in the content stream, and the retrieval of similar products given a consumer-selected product. The proposed framework consists of three components. The first component performs computation- and time-aware multimedia abstraction to select a reduced number of frames that summarize the important information in the video stream. By both reducing the number of frames and taking into account the computational cost of the subsequent detection phase, this component allows the efficient detection of products in the stream. The second component realizes the detection phase. It executes scalable product detection using multi-cue optimization. Additional information cues are formulated into an optimization problem that allows the detection of complex products, i.e., those that do not have a rigid form and can appear in various poses. After the second component identifies products in the video stream, the consumer can select an interesting one for which similar ones must be located in a product database. To this end, the third component of the framework consists of an efficient, multi-dimensional, tree-based indexing method for multimedia databases. The proposed index mechanism serves as the backbone of the search. Moreover, it is able to efficiently bridge the semantic gap and perception-subjectivity issues during the retrieval process to provide more relevant results.
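As a rough illustration of the retrieval step only: the dissertation proposes its own multi-dimensional tree index that also addresses the semantic gap, but a generic k-d tree over hypothetical product feature vectors is enough to sketch the query flow of "find products similar to the selected one".

```python
# Stand-in sketch for similar-product retrieval: index hypothetical product
# descriptors and query the nearest neighbors of a consumer-selected item.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
catalog = rng.random((10_000, 64))            # hypothetical 64-D product descriptors
product_ids = np.arange(len(catalog))

index = cKDTree(catalog)                      # build the spatial index once

query = catalog[42] + 0.01 * rng.random(64)   # descriptor of the selected on-screen item
dists, idx = index.query(query, k=5)          # 5 most similar catalog products
print(list(zip(product_ids[idx], np.round(dists, 3))))
```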

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we describe how the Pathfinder algorithm converts relatedness ratings of concept pairs into concept maps; we also present how this algorithm has been used to develop the Concept Maps for Learning website (www.conceptmapsforlearning.com) based on the principles of effective formative assessment. Pathfinder networks, a network representation tool, have been claimed to help students memorize and recall the relations between concepts better than spatial representation tools (such as Multi-Dimensional Scaling). Therefore, Pathfinder networks have been used in various studies on knowledge structures, including the identification of students' misconceptions. To identify misconceptions, each student's knowledge map and the expert knowledge map are compared via the Pathfinder software, and the differences between these maps are highlighted. Once misconceptions are identified, however, the Pathfinder software does not provide any feedback on them. To overcome this weakness, we have been developing a mobile-based concept mapping tool that provides visual, textual and remedial feedback (e.g., videos, website links and applets) on the concept relations. This information is placed on the expert concept map, but not on the student's concept map. Additionally, students are asked to note what they understand from the given feedback, and are given the opportunity to revise their knowledge maps after receiving various types of feedback.
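For readers unfamiliar with Pathfinder pruning, the sketch below shows the common PFNET(r = infinity, q = n - 1) case: an edge between two concepts survives only if no alternative path connects them with a smaller maximum link weight (the minimax distance). The ratings matrix is made up for illustration and is not data from the Concept Maps for Learning website.

```python
# Sketch of Pathfinder pruning for PFNET(r = inf, q = n - 1) on a made-up
# dissimilarity matrix (lower weight = more related concepts).
import numpy as np

w = np.array([
    [0, 1, 4, 6],
    [1, 0, 2, 5],
    [4, 2, 0, 1],
    [6, 5, 1, 0],
], dtype=float)

# Floyd-Warshall with (min, max) composition gives the minimax path distance.
d = w.copy()
n = len(d)
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i, j] = min(d[i, j], max(d[i, k], d[k, j]))

# An edge survives iff its direct weight does not exceed the best indirect path.
pfnet_edges = [(i, j, w[i, j]) for i in range(n) for j in range(i + 1, n)
               if w[i, j] <= d[i, j]]
print(pfnet_edges)   # only the strongest links remain
```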

Relevance:

100.00%

Publisher:

Abstract:

Conventional taught learning practices often experience difficulties in keeping students motivated and engaged. Video games, however, are very successful at sustaining high levels of motivation and engagement through a set of tasks for hours without apparent loss of focus. In addition, gamers solve complex problems within a gaming environment without feeling the fatigue or frustration they would typically feel with a comparable learning task. Based on this notion, the academic community is keen to explore methods that can deliver deep learner engagement and has shown increased interest in adopting gamification – the integration of gaming elements, mechanics, and frameworks into non-game situations and scenarios – as a means to increase student engagement and improve information retention. Its effectiveness when applied to education has been debated, though, as attempts have generally been restricted to one-dimensional approaches such as transposing a trivial reward system onto existing teaching materials and/or assessments. Nevertheless, a gamified, multi-dimensional, problem-based learning approach can yield improved results even when applied to a very complex and traditionally dry task such as the teaching of computer programming, as shown in this paper. The presented quasi-experimental study used a combination of instructor feedback, real-time sequences of scored quizzes, and live coding to deliver a fully interactive learning experience. More specifically, the “Kahoot!” Classroom Response System (CRS), the classroom version of the TV game show “Who Wants To Be A Millionaire?”, and Codecademy’s interactive platform formed the basis for a learning model that was applied to an entry-level Python programming course. Students were thus able to experience multiple interlocking methods similar to those commonly found in a top-quality game experience. To assess gamification’s impact on learning, empirical data from the gamified group were compared with those from a control group that was taught through a traditional learning approach similar to the one used during previous cohorts. Despite this being a relatively small-scale study, the results and findings for a number of key metrics, including attendance, downloading of course material, and final grades, were encouraging and showed that the gamified approach was motivating and enriching for both students and instructors.

Relevance:

100.00%

Publisher:

Abstract:

A primary goal of context-aware systems is to deliver the right information at the right place and the right time so that users can make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive - the user requests some information and the system delivers it. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request. To overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly, the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last 6 years. Since location is one of the most important elements of users’ context, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information is relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. To personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and an approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of the generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device-charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
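Purely as an illustrative sketch of one of the behavioral models mentioned above (predicting availability for accepting calls), the snippet below trains a classifier on synthetic contextual features. The feature set, data, and the choice of a random forest are assumptions made here; they are not the Rover framework's actual features or algorithms.

```python
# Illustrative behavioral model: will the user accept an incoming call, given
# hypothetical context features? Synthetic data and features, for sketching only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
# Hypothetical context: hour of day, is-weekend, at-semantic-place "work",
# minutes since last screen-on, ringer volume level.
X = np.column_stack([
    rng.integers(0, 24, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.exponential(30.0, n),
    rng.integers(0, 4, n),
])
# Synthetic ground truth: more likely to accept when recently active and not at work.
y = ((X[:, 3] < 20) & (X[:, 2] == 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
```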

Relevance:

100.00%

Publisher:

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer, as a subroutine, executing a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to include more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is very flexible and extensible so as to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features and the other of which handles tasks with text features by incorporating topic models. Our models can effectively recover trust vectors of workers, which can be very useful for task assignment that adapts to workers' trust in the future. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completion of a challenging task is more costly than a click-away question. Here, we address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of the crowd, and the cost of obtaining labels from the crowd: a higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
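A much-simplified sketch of the trust-based aggregation idea is shown below: labels are estimated by trust-weighted voting, and each worker's scalar trust is re-estimated from agreement with the current estimates. The thesis's models are richer (multi-dimensional trust, question domains, logical constraints, budgeted assignment); the data here are synthetic and the scalar, binary-label setting is an assumption for illustration.

```python
# Simplified trust-weighted answer aggregation on synthetic crowdsourcing data.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_tasks = 20, 200
true_labels = rng.integers(0, 2, n_tasks)
reliability = np.clip(rng.normal(0.75, 0.2, n_workers), 0.05, 0.95)
reliability[:4] = 0.2                      # a few careless / adversarial workers

# answers[w, t] = worker w's answer to task t.
answers = np.where(rng.random((n_workers, n_tasks)) < reliability[:, None],
                   true_labels, 1 - true_labels)

trust = np.full(n_workers, 0.5)            # start with uniform trust
for _ in range(10):
    # (1) trust-weighted vote per task.
    votes_for_1 = trust @ answers
    labels = (votes_for_1 > 0.5 * trust.sum()).astype(int)
    # (2) trust = fraction of tasks where the worker agrees with the consensus.
    trust = (answers == labels).mean(axis=1)

print("label accuracy:", (labels == true_labels).mean())
print("low-trust workers:", np.where(trust < 0.5)[0])
```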

Relevance:

100.00%

Publisher:

Abstract:

Poverty among children and young people is a significant social problem in Poland. The complexity and multi-dimensional consequences of the impoverishment of young people's families call not only for social assistance provided by the state but also for effective action by the school environment. The following text attempts to answer the key question phrased in the title. It can serve as a source of information on significant problems of the school environment that demand further reflection and appropriate action. The research was carried out in 2014 among 146 urban and rural school teachers using a diagnostic survey. The main tools used to gather information were a questionnaire and open interviews. The survey results made it possible to assess the situation of the surveyed schools in a comparative perspective, with reference to best practices from other European countries.

Relevance:

100.00%

Publisher:

Abstract:

The research described in this thesis was motivated by the need for a robust model capable of representing 3D data obtained with 3D sensors, which are inherently noisy. In addition, time constraints have to be considered, as these sensors are capable of providing a 3D data stream in real time. This thesis proposes the use of Self-Organizing Maps (SOMs) as a 3D representation model. In particular, we propose the use of the Growing Neural Gas (GNG) network, which has been successfully used for clustering, pattern recognition and topology representation of multi-dimensional data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models, without considering time constraints. We propose a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). The proposed methods were applied to different problems and applications in the area of computer vision, such as the recognition and localization of objects, visual surveillance, and 3D reconstruction.
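A minimal sketch of a GNG-style update loop on a 3D point stream is given below. The parameter values, the synthetic stream, and the simplified growth rule are illustrative assumptions; the thesis's GPU implementation and the full GNG bookkeeping (removal of isolated units, error-decay schedules) are not reproduced.

```python
# Minimal Growing Neural Gas (GNG)-style sketch on a synthetic 3D point stream.
import numpy as np

rng = np.random.default_rng(0)
EPS_W, EPS_N, MAX_AGE, LAMBDA = 0.05, 0.005, 50, 100   # illustrative parameters

nodes = [rng.random(3), rng.random(3)]          # start with two random units
errors = [0.0, 0.0]
edges = {frozenset((0, 1)): 0}                  # edge -> age

def adapt(x):
    """Move the winner and its topological neighbours toward sample x."""
    d = [np.linalg.norm(x - n) for n in nodes]
    s1, s2 = (int(i) for i in np.argsort(d)[:2])
    errors[s1] += d[s1] ** 2
    nodes[s1] += EPS_W * (x - nodes[s1])
    for e in list(edges):
        if s1 in e:
            edges[e] += 1
            other = next(i for i in e if i != s1)
            nodes[other] += EPS_N * (x - nodes[other])
    edges[frozenset((s1, s2))] = 0              # refresh/create winner-runner-up edge
    for e in [e for e, age in edges.items() if age > MAX_AGE]:
        del edges[e]

def grow():
    """Insert a unit halfway between the worst unit and its worst neighbour."""
    q = int(np.argmax(errors))
    nbrs = [next(i for i in e if i != q) for e in edges if q in e]
    if not nbrs:
        return
    f = max(nbrs, key=lambda i: errors[i])
    nodes.append(0.5 * (nodes[q] + nodes[f]))
    errors[q] *= 0.5
    errors[f] *= 0.5
    errors.append(errors[q])
    r = len(nodes) - 1
    edges.pop(frozenset((q, f)), None)
    edges[frozenset((q, r))] = 0
    edges[frozenset((f, r))] = 0

for step, x in enumerate(rng.random((5000, 3)), 1):   # stand-in for a noisy 3D stream
    adapt(x)
    if step % LAMBDA == 0:
        grow()
print(len(nodes), "units,", len(edges), "edges")
```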

Relevance:

100.00%

Publisher:

Abstract:

Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize either a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but they are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but they often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the front end (e.g., displaying the results of many different metrics concisely) and in the back end (e.g., scalability challenges in running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
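In the spirit of high-volume hypothesis testing described above, the sketch below runs one nonparametric test per metric across two cohorts and then applies a false-discovery-rate correction. The metrics, synthetic data, and the choice of Mann-Whitney U with a Benjamini-Hochberg correction are assumptions for illustration, not the dissertation's actual framework.

```python
# Illustrative "many tests at once" sketch: one test per cohort metric,
# followed by Benjamini-Hochberg FDR control. Synthetic data only.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical per-record metrics for two cohorts, one column per metric.
metrics = ["n_events", "duration_h", "mean_gap_min", "n_distinct_types"]
cohort_a = rng.gamma(shape=2.0, scale=3.0, size=(200, len(metrics)))
cohort_b = rng.gamma(shape=2.3, scale=3.0, size=(180, len(metrics)))

pvals = np.array([
    mannwhitneyu(cohort_a[:, j], cohort_b[:, j], alternative="two-sided").pvalue
    for j in range(len(metrics))
])

# Benjamini-Hochberg step-up procedure at FDR q = 0.05.
q = 0.05
order = np.argsort(pvals)
m = len(pvals)
thresh = q * np.arange(1, m + 1) / m
passed = pvals[order] <= thresh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
significant = set(order[:k])

for j, name in enumerate(metrics):
    flag = "significant" if j in significant else "not significant"
    print(f"{name:18s} p={pvals[j]:.4f}  {flag}")
```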

Relevance:

100.00%

Publisher:

Abstract:

With its powerful search engines and billions of published pages, the World Wide Web has become the ultimate tool to explore the human experience. But, despite the advent of the digital revolution, e-books, at their core, have remained remarkably similar to their printed siblings. This has resulted in a clear dichotomy between two ways of reading: on one side, the multi-dimensional world of the Web; on the other, the linearity of books and e-books. My investigation of the literature indicates that the focus of attempts to merge these two modes of production, and hence of reading, has been the insertion of interactivity into fiction. As I will show in the Literature Review, a clear thrust of research since the early 1990s, and in my opinion the most significant, has concentrated on presenting the reader with choices that affect the plot. This has resulted in interactive stories in which the structure of the narrative can be altered by the reader of experimental fiction. The interest in this area of research is not surprising, as the interaction of readers with the fabric of the narrative provides fertile ground for exploring, analysing, and discussing issues of plot consistency and continuity. I found in the literature several papers concerned with the effects of hyperlinking on literature, but none about how hyperlinked material and narrative could be integrated without compromising the narrative flow as designed by the author. This led me to think that the researchers had accepted hypertextuality and the linear organisation of fiction as being antithetical, thereby ignoring the possibility of exploiting the first while preserving the second. All the works I consulted were focussed on exploring the possibilities provided to authors (and readers) by hypertext, or on how hypertext literature affects literary criticism. This was true in earlier works by Landow and Harpold and remained true in later works by Bolter and Grusin. To quote another example, in his book Hypertext 3.0, Landow states: “Most who have speculated on the relation between hypertextuality and fiction concentrate [...] on the effects it will have on linear narrative”, and “hypertext opens major questions about story and plot by apparently doing away with linear organization” (Landow, 2006, pp. 220, 221). In other words, the authors have added narrative elements to Web pages, effectively placing their stories in a subordinate role. By focussing on “opening up” the plots, the researchers have missed the opportunity to maintain the integrity of their stories and use hyperlinked information to provide interactive access to backstory and factual bases. This would represent a missing link between the traditional way of reading, in which readers have no influence on the path the author has laid out for them, and interactive narrative, in which readers choose their way across alternatives, thereby, at least to a certain extent, creating their own path. It would be, to continue the metaphor, as if readers could follow the main path created by the author while being able to get “sidetracked” into exploring hyperlinked material. In Hypertext 3.0, Landow refers to an “Axial structure [of hypertext] characteristic of electronic books and scholarly books with foot- and endnotes” versus a “Network structure of hypertext” (Landow, 2006, p. 70). My research aims at generalising the axial structure and extending it to fiction without losing the linearity at its core.
In creative nonfiction, the introduction of places, scenes, and settings, together with characterisation, brings the facts to life without altering them, while much fiction draws on facts to provide a foundation, or narrative elements, for the work. But how can the reader distinguish between facts and representations? For example, to what extent do dialogues and perceptions present what was actually said and thought? Some authors of creative nonfiction use endnotes to provide comments and citations while minimising disruption to the flow of the main text, but these are limited in scope and constrained in space. Each reader should be able to enjoy the narrative as if it were a novel but also to explore the facts at the level of detail he or she needs. For this to be possible, endnotes should provide a Web-like way of exploring in more detail what the author has already researched. My research aims to develop ways of integrating narrative prose and hyperlinked documents into a Hyperbook. Its goal is to create a new writing paradigm in which a story incorporates a gateway to detailed information. While creative nonfiction uses the techniques of fictional writing to provide reportage of actual events, and fact-based fiction illuminates the affectual dimensions of what happened (e.g., Kate Grenville’s The Secret River and Hilary Mantel’s Wolf Hall), Hyperbooks go one step further and link narrative prose to the details of the events on which the narrative is based or, more generally, to information the reader might find of interest. My dissertation introduces and utilises Hyperbooks to engage in two parallel types of investigation: building knowledge about Italian WWII POWs held in Australia and presenting it as part of a novella in Hyperbook format, and developing a new piece of technology capable of extending the writing and reading process.

Relevance:

100.00%

Publisher:

Abstract:

The digital divide is an issue that concerns our technology-dominated society. The pioneers of ubiquitous computing dreamt of a total proliferation of information technology, but the reality we live in is not yet prepared for this future. There is currently a need to develop programs in order to diminish this difference between the digitally included and the excluded. PROEJA-Transiarte is a project run by Universidade de Brasília in the city of Ceilândia, Federal District of Brazil. It proposes a different approach to the issue of the digital divide by introducing the cooperative creation of cyberart, based on the life stories of each participant, into the regular curriculum of EJA (Educação de Jovens e Adultos) classes, thus implementing the concept of solidary education. This research project investigated the role that the cooperative learning practiced by the students during the project's workshops plays in diminishing the digital exclusion that a large proportion of the students experience. It looked into their activities, analyzing the development of their cooperation and then placing it in the context of digital and social inclusion. After multi-dimensional research on the theme in the context of PROEJA-Transiarte, the conclusion shows the impact that cooperative learning has on reducing the digital divide, drawing on the perceptions of the currently involved students, the researchers active in the project, and former students whose lives improved because of the workshops they participated in.

Relevance:

100.00%

Publisher:

Abstract:

The elimination of barriers between countries is a consequence of globalization and of the free trade agreements signed in recent years. This implies significant growth in foreign trade, which is reflected in an increase in the complexity of companies' supply chains. As a result, it becomes necessary to search for alternatives that allow companies in Colombia to reach high levels of productivity and competitiveness, since the environment has become increasingly complex and saturated with competition that is not only national but also international. To remain in a favorable competitive position, companies must focus on the activities that add value to their business, which is why one of the alternatives being adopted today is the outsourcing of logistics functions to companies specialized in managing these services. Such companies are Logistics Service Providers (LSP), which act as agents external to the organization by managing, controlling, and providing logistics activities on behalf of a contracting party. The activities performed may include all or part of the logistics activities, but at a minimum the management and execution of transport and warehousing must be included (Berglund, 2000). The purpose of this document is to analyze the role of third-party logistics providers (3PL) as promoters of organizational performance in Colombian companies, in order to inform MSMEs about the benefits of working with LSPs as a means of improving the country's competitive position.

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, the combined use of chemometrics and molecular spectroscopic techniques has become a new alternative for direct drug determination, without the need for physical separation. Among the new methodologies developed, the application of PARAFAC to the decomposition of spectrofluorimetric data should be highlighted. The first objective of this article is to describe the theoretical basis of PARAFAC. For this purpose, a discussion of the order of the chemometric methods used in multivariate calibration and of the development of multi-dimensional methods is presented first. The other objective of this article is to present to the Brazilian chemical community the potential of combining PARAFAC with spectrofluorimetry for the determination of drugs in complex biological matrices. For this purpose, two applications aimed at determining, respectively, doxorubicin and salicylate in human plasma are presented.
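A minimal sketch of a PARAFAC (CP) decomposition by alternating least squares on a three-way excitation x emission x sample tensor is shown below. The synthetic data, rank, and iteration count are assumptions for illustration; real applications would use measured excitation-emission data and typically a dedicated chemometrics package.

```python
# Minimal PARAFAC / CP decomposition by alternating least squares on a
# synthetic trilinear fluorescence-like tensor. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 30, 40, 12, 2                 # excitation, emission, samples, factors

# Build a synthetic trilinear tensor with a little noise.
A_true, B_true, C_true = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
X = np.einsum("ir,jr,kr->ijk", A_true, B_true, C_true) + 0.01 * rng.random((I, J, K))

def khatri_rao(U, V):
    """Column-wise Kronecker product: rows indexed by (row of U, row of V)."""
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
for _ in range(200):                        # ALS sweeps
    X0 = X.reshape(I, J * K)                # X0[i, j*K + k] = X[i, j, k]
    A = np.linalg.solve((B.T @ B) * (C.T @ C), (X0 @ khatri_rao(B, C)).T).T
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    B = np.linalg.solve((A.T @ A) * (C.T @ C), (X1 @ khatri_rao(A, C)).T).T
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    C = np.linalg.solve((A.T @ A) * (B.T @ B), (X2 @ khatri_rao(A, B)).T).T

X_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
print("relative fit error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```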

Relevance:

100.00%

Publisher:

Abstract:

This paper describes U2DE, a finite-volume code that numerically solves the Euler equations. The code was used to perform multi-dimensional simulations of the gradual opening of a primary diaphragm in a shock tube. From the simulations, the speed of the developing shock wave was recorded and compared with other estimates. The ability of U2DE to compute shock speed was confirmed by comparing numerical results with the analytic solution for an ideal shock tube. For high initial pressure ratios across the diaphragm, previous experiments have shown that the measured shock speed can exceed the shock speed predicted by one-dimensional models. The shock speeds computed with the present multi-dimensional simulation were higher than those estimated by previous one-dimensional models and, thus, were closer to the experimental measurements. This indicates that multi-dimensional flow effects were partly responsible for the relatively high shock speeds measured in the experiments.
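For reference, the sketch below evaluates ideal (instantaneously ruptured diaphragm) shock-tube theory of the kind used as the one-dimensional analytic benchmark: given the initial pressure ratio p4/p1, the classical shock-tube equation is solved for the shock strength p2/p1, which then gives the shock speed via the normal-shock relation. The gas properties, temperatures, and pressure ratio are illustrative assumptions, not the paper's test conditions.

```python
# Ideal shock-tube estimate of shock speed from the diaphragm pressure ratio.
import numpy as np

gamma = 1.4                 # same ideal gas in driver and driven sections (assumption)
R_gas = 287.0               # J/(kg.K), air
T1 = T4 = 300.0             # driven / driver temperatures [K] (assumption)
p4_over_p1 = 100.0          # diaphragm pressure ratio (assumption)

a1 = np.sqrt(gamma * R_gas * T1)    # sound speed in the driven gas
a4 = np.sqrt(gamma * R_gas * T4)    # sound speed in the driver gas

def required_driver_ratio(p21):
    """p4/p1 needed to produce a shock of strength p2/p1 (ideal shock-tube equation)."""
    num = (gamma - 1.0) * (a1 / a4) * (p21 - 1.0)
    den = np.sqrt(2.0 * gamma * (2.0 * gamma + (gamma + 1.0) * (p21 - 1.0)))
    base = 1.0 - num / den
    if base <= 0.0:                 # beyond the strongest shock this driver can produce
        return np.inf
    return p21 * base ** (-2.0 * gamma / (gamma - 1.0))

# Bracket and bisect the monotone relation to find p2/p1 for the given p4/p1.
lo, hi = 1.0 + 1e-9, 2.0
while required_driver_ratio(hi) < p4_over_p1:
    hi *= 1.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if required_driver_ratio(mid) < p4_over_p1:
        lo = mid
    else:
        hi = mid
p2_over_p1 = 0.5 * (lo + hi)

# Normal-shock relation: shock Mach number and speed implied by p2/p1.
Ms = np.sqrt((gamma + 1.0) / (2.0 * gamma) * (p2_over_p1 - 1.0) + 1.0)
print(f"p2/p1 = {p2_over_p1:.3f}, Ms = {Ms:.3f}, shock speed = {Ms * a1:.1f} m/s")
```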