987 results for virtual property theft
Abstract:
The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.
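A minimal sketch of the view-based matching idea described above, assuming each person is represented by feature vectors extracted from several stored (real or virtual) views; the feature representation, the gallery layout, and the nearest-neighbor rule are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def nearest_person(probe_vec, gallery):
    """Match a probe view against every stored view of every person.

    gallery: dict mapping person_id -> array of shape (n_views, d),
             where each row is a feature vector for one (real or virtual) view.
    probe_vec: feature vector of shape (d,) for the input image.
    Returns (best_person, best_distance).
    """
    best_person, best_dist = None, np.inf
    for person, views in gallery.items():
        # Distance to the closest stored view of this person
        dists = np.linalg.norm(views - probe_vec, axis=1)
        d = dists.min()
        if d < best_dist:
            best_person, best_dist = person, d
    return best_person, best_dist

# Toy usage: two people, three 128-dimensional view templates each
rng = np.random.default_rng(0)
gallery = {"alice": rng.normal(size=(3, 128)), "bob": rng.normal(size=(3, 128))}
probe = gallery["alice"][1] + 0.05 * rng.normal(size=128)  # noisy view of "alice"
print(nearest_person(probe, gallery)[0])  # -> "alice"
```

In case (ii), the per-person view arrays would mix the single real view with synthesized virtual views, leaving the matching step unchanged.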
Abstract:
The Diagnose Virtual system is a Web-based virtual environment for diagnosing plant diseases and animal illnesses that applies inference (investigation) mechanisms to previously categorized expert knowledge. The purpose of this document is to guide the user of the Diagnose Virtual system through the procedure for using it, so that correct results are obtained with the least effort. The system also provides online help, in which each system feature is briefly described whenever the mouse pointer rests for a moment over that feature. Additional help is available on each screen by clicking the question-mark symbol in the lower right corner. The document covers the user/producer module, in which the characteristics of a problem (a case) in a given crop are explored until a diagnosis is obtained. As results, the possible disorders are provided together with their respective degrees of certainty.
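As a rough illustration of how candidate disorders with degrees of certainty might be accumulated from observed symptoms, the sketch below uses a MYCIN-style certainty-factor combination; the rules, symptom names, and weights are invented for the example and are not taken from the Diagnose Virtual knowledge base.

```python
# Illustrative certainty-factor combination; rule base is hypothetical.
RULES = {
    "leaf_rust": {"orange_pustules": 0.8, "yellowing_leaves": 0.4},
    "root_rot":  {"wilting": 0.6, "dark_roots": 0.7},
}

def combine(cf1, cf2):
    # Combine two positive certainty factors: cf1 + cf2 * (1 - cf1)
    return cf1 + cf2 * (1.0 - cf1)

def diagnose(observed_symptoms):
    """Return candidate disorders with an accumulated degree of certainty."""
    results = {}
    for disorder, evidence in RULES.items():
        cf = 0.0
        for symptom, weight in evidence.items():
            if symptom in observed_symptoms:
                cf = combine(cf, weight)
        if cf > 0:
            results[disorder] = round(cf, 2)
    return dict(sorted(results.items(), key=lambda kv: -kv[1]))

print(diagnose({"orange_pustules", "yellowing_leaves"}))
# {'leaf_rust': 0.88}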
Abstract:
The Diagnose Virtual system is a Web-based virtual environment for diagnosing plant diseases and animal illnesses that uses inference mechanisms based on expert knowledge to simulate the diagnostic process. The purpose of this document is to guide the user of the Diagnose Virtual system through the procedure for using it, so that correct results are obtained with the least effort.
Abstract:
Interaction of the traditional Chinese herb Rhizoma Chuanxiong with protein was studied by microdialysis coupled with high-performance liquid chromatography. Compounds in Rhizoma Chuanxiong, such as ferulic acid, senkyunolide A and 3-butylphthalide, were identified by HPLC, HPLC-MS and UV-vis. Microdialysis recoveries and binding degrees of compounds in Rhizoma Chuanxiong with human serum albumin (HSA) and other human plasma proteins were determined: recoveries of microdialysis sampling ranged from 36.7 to 98.4% with R.S.D. below 3.1%, while binding to HSA ranged from 0 to 91.5% (0.3 mM HSA) and from 0 to 93.5% (0.6 mM HSA), respectively. Compared with HSA, most of the compounds bound to human blood serum more extensively, and the results showed that binding of these compounds in Rhizoma Chuanxiong was influenced by pH. Two compounds were found to bind to both HSA and human blood serum; their binding degrees were consistent with those of ferulic acid and 3-butylphthalide, the active compounds in Rhizoma Chuanxiong. (c) 2005 Elsevier B.V. All rights reserved.
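For reference, the two quantities reported above are conventionally defined as follows; these are standard microdialysis definitions, not necessarily the exact formulas used in the paper.

```latex
% Standard definitions (assumed, not quoted from the paper):
% relative recovery of microdialysis sampling and protein-binding degree.
\[
  R \;=\; \frac{C_{\text{dialysate}}}{C_{\text{sample}}} \times 100\%,
  \qquad
  B \;=\; \left(1 - \frac{C_{\text{free}}}{C_{\text{total}}}\right) \times 100\%
\]
% where C_free is the unbound concentration measured (after recovery
% correction) in the presence of protein and C_total is the concentration
% in the protein-free reference solution.
```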
Abstract:
No impact factor (2012)
Abstract:
Urquhart, C., Spink, S., Thomas, R., Yeoman, A., Durbin, J., Turner, J., Fenton, R. & Armstrong, C. (2004). Evaluating the development of virtual learning environments in higher and further education. In J. Cook (Ed.), Blue skies and pragmatism: learning technologies for the next decade. Research proceedings of the 11th Association for Learning Technology conference (ALT-C 2004), 14-16 September 2004, University of Exeter, Devon, England (pp. 157-169). Oxford: Association for Learning Technology Sponsorship: JISC
Abstract:
Yeoman, A., Urquhart, C. & Sharp, S. (2003). Moving Communities of Practice forward: the challenge for the National electronic Library for Health and its Virtual Branch Libraries. Health Informatics Journal, 9(4), 241-252. Previously appeared as a conference paper for the iSHIMR2003 conference (Proceedings of the Eighth International Symposium on Health Information Management Research, June 1-3, 2003, Boras, Sweden) Sponsorship: NHS Information Authority/National electronic Library for Health
Abstract:
This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues, analyzing participant responses either by gaze or by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant’s eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster when compared with stepped 2-image agent cues, and 42% faster when compared with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, although the differences between conditions were slightly smaller. Responses to the fully animated agent were 17% and 20% faster when compared with the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users’ attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on the users’ eye movements and touch responses.
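A hedged reading of the percentage comparisons above, where "X% faster" is taken as a relative reduction in mean response time with respect to the slower condition; the response times below are made-up numbers chosen only to reproduce differences of the reported magnitude, not data from the experiments.

```python
def percent_faster(t_fast, t_slow):
    """Relative reduction in mean response time, as a percentage."""
    return 100.0 * (t_slow - t_fast) / t_slow

# Hypothetical mean response times in milliseconds (illustrative only).
t_animated, t_two_image, t_static = 650.0, 1000.0, 1120.0
print(round(percent_faster(t_animated, t_two_image)))  # ~35
print(round(percent_faster(t_animated, t_static)))     # ~42
```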
Abstract:
IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 5, pp. 1338-1343, 2003.
Abstract:
This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm, called Virtual Deadline Scheduling (VDS), which provides window-constrained service guarantees to jobs with potentially different request periods, while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Even so, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms, while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
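A toy sketch of the window-constrained bookkeeping discussed above: each job must meet at least m of every k consecutive instance deadlines, and an instance's virtual deadline is relaxed while the window still has slack. This only illustrates the (m, k) accounting; the deadline relaxation shown is an assumption, not the paper's actual VDS formula or algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class WindowJob:
    """Track an (m, k) constraint: at least m of every k instances on time."""
    m: int                      # required on-time instances per window
    k: int                      # window length in instances
    history: list = field(default_factory=list)  # last k outcomes (True = on time)

    def record(self, on_time: bool):
        self.history.append(on_time)
        if len(self.history) > self.k:
            self.history.pop(0)

    def misses_remaining(self) -> int:
        """How many more instances in the current window may still be late."""
        misses_allowed = self.k - self.m
        misses_used = self.history.count(False)
        return max(0, misses_allowed - misses_used)

    def virtual_deadline(self, real_deadline: float, period: float) -> float:
        # Illustrative relaxation: the more misses the window can still absorb,
        # the later this instance may be served (NOT the actual VDS formula).
        return real_deadline + self.misses_remaining() * period

job = WindowJob(m=3, k=5)
for ok in (True, False, True):
    job.record(ok)
print(job.virtual_deadline(real_deadline=100.0, period=10.0))  # 110.0
```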
Abstract:
With the increased use of "Virtual Machines" (VMs) as vehicles that isolate applications running on the same host, it is necessary to devise techniques that enable multiple VMs to share underlying resources both fairly and efficiently. To that end, one common approach is to deploy complex resource management techniques in the hosting infrastructure. Alternatively, in this paper, we advocate the use of self-adaptation in the VMs themselves based on feedback about resource usage and availability. Consequently, we define a "Friendly" VM (FVM) to be a virtual machine that adjusts its demand for system resources so that they are both efficiently and fairly allocated to competing FVMs. Such properties are ensured using one of many provably convergent control rules, such as AIMD. By adopting this distributed, application-based approach to resource management, it is not necessary to make assumptions about the underlying resources nor about the requirements of FVMs competing for these resources. To demonstrate the elegance and simplicity of our approach, we present a prototype implementation of our FVM framework in User-Mode Linux (UML), an implementation that consists of less than 500 lines of code changes to UML. We present an analytic, control-theoretic model of FVM adaptation, which establishes convergence and fairness properties. These properties are also backed up with experimental results using our prototype FVM implementation.
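A minimal sketch of the kind of AIMD adjustment rule mentioned above, applied to a VM's resource demand; the increase/decrease constants, the ceiling, and the congestion signal are placeholders, not the paper's FVM parameters or feedback mechanism.

```python
def aimd_step(demand, congested, alpha=1.0, beta=0.5, ceiling=100.0):
    """One AIMD update of a Friendly VM's resource demand.

    demand:    current share of the resource the VM asks for
    congested: feedback signal that the underlying resource is overloaded
    alpha:     additive increase per step when there is headroom (assumed value)
    beta:      multiplicative decrease factor on congestion (assumed value)
    """
    if congested:
        return demand * beta             # back off multiplicatively
    return min(demand + alpha, ceiling)  # probe for more, additively

# Toy trace: demand grows linearly, then halves when congestion is signalled.
d = 10.0
for congested in [False, False, False, True, False]:
    d = aimd_step(d, congested)
    print(round(d, 1))   # 11.0, 12.0, 13.0, 6.5, 7.5
```

Competing FVMs each running such a rule converge toward a fair share without any coordination in the hosting infrastructure, which is the property the control-theoretic analysis establishes.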
Abstract:
As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense and respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints; (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
Abstract:
The Java programming language has been widely described as secure by design. Nevertheless, a number of serious security vulnerabilities have been discovered in Java, particularly in the component known as the Bytecode Verifier. This paper describes a method for representing Java security constraints using the Alloy modeling language. It further describes a system for performing a security analysis on any block of Java bytecodes by converting the bytes into relation initializers in Alloy. Any counterexamples found by the Alloy analyzer correspond directly to insecure code. Analysis of a real-world malicious applet is given to demonstrate the efficacy of the approach.
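As a rough illustration of the kind of constraint a bytecode verifier enforces, and that a counterexample of the sort mentioned above would violate, the sketch below checks operand-stack depth for a tiny, hypothetical instruction sequence. It is written in Python for readability and is neither the paper's Alloy model nor the real JVM Bytecode Verifier; the stack-effect table and instruction stream are invented for the example.

```python
# Stack-effect table for a few JVM-like instructions: (pops, pushes).
# Both the table and the instruction streams are illustrative.
STACK_EFFECT = {
    "iconst":  (0, 1),  # push an int constant
    "iload":   (0, 1),  # push a local int
    "iadd":    (2, 1),  # pop two ints, push their sum
    "ireturn": (1, 0),  # pop the return value
}

def check_stack_safety(instructions, max_stack):
    """Reject code that would underflow or overflow the operand stack."""
    depth = 0
    for op in instructions:
        pops, pushes = STACK_EFFECT[op]
        if depth < pops:
            return False, f"stack underflow at {op}"
        depth = depth - pops + pushes
        if depth > max_stack:
            return False, f"stack overflow at {op}"
    return True, "ok"

print(check_stack_safety(["iconst", "iload", "iadd", "ireturn"], max_stack=2))
print(check_stack_safety(["iadd", "ireturn"], max_stack=2))  # underflow -> insecure
```

In the paper's approach, constraints of this flavor are expressed as Alloy relations over the bytecode, so that the Alloy analyzer's counterexamples correspond directly to insecure code.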
Abstract:
The Internet and World Wide Web have had, and continue to have, an incredible impact on our civilization. These technologies have radically influenced the way that society is organised and the manner in which people around the world communicate and interact. The structure and function of individual, social, organisational, economic and political life begin to resemble the digital network architectures upon which they are increasingly reliant. It is increasingly difficult to imagine how our ‘offline’ world would look or function without the ‘online’ world; it is becoming less meaningful to distinguish between the ‘actual’ and the ‘virtual’. Thus, the major architectural project of the twenty-first century is to “imagine, build, and enhance an interactive and ever changing cyberspace” (Lévy, 1997, p. 10). Virtual worlds are at the forefront of this evolving digital landscape. Virtual worlds have “critical implications for business, education, social sciences, and our society at large” (Messinger et al., 2009, p. 204). This study focuses on the possibilities of virtual worlds in terms of communication, collaboration, innovation and creativity. The concept of knowledge creation is at the core of this research. The study shows that scholars increasingly recognise that knowledge creation, as a socially enacted process, goes to the very heart of innovation. However, efforts to build upon these insights have struggled to escape the influence of the information processing paradigm of old and have failed to move beyond the persistent but problematic conceptualisation of knowledge creation in terms of tacit and explicit knowledge. Based on these insights, the study leverages extant research to develop the conceptual apparatus necessary to carry out an investigation of innovation and knowledge creation in virtual worlds. The study derives and articulates a set of definitions (of virtual worlds, innovation, knowledge and knowledge creation) to guide research. The study also leverages a number of extant theories in order to develop a preliminary framework to model knowledge creation in virtual worlds. Using a combination of participant observation and six case studies of innovative educational projects in Second Life, the study yields a range of insights into the process of knowledge creation in virtual worlds and into the factors that affect it. The study’s contributions to theory are expressed as a series of propositions and findings and are represented as a revised and empirically grounded theoretical framework of knowledge creation in virtual worlds. These findings highlight the importance of prior related knowledge and intrinsic motivation in terms of shaping and stimulating knowledge creation in virtual worlds. At the same time, they highlight the importance of meta-knowledge (knowledge about knowledge) in terms of guiding the knowledge creation process, whilst revealing the diversity of behavioural approaches actually used to create knowledge in virtual worlds. This theoretical framework is itself one of the chief contributions of the study, and the analysis explores how it can be used to guide further research in virtual worlds and on knowledge creation. The study’s contributions to practice are presented as an actionable guide to stimulate knowledge creation in virtual worlds. This guide utilises a theoretically based classification of four knowledge-creator archetypes (the sage, the lore master, the artisan, and the apprentice) and derives an actionable set of behavioural prescriptions for each archetype. The study concludes with a discussion of its implications in terms of future research.
Abstract:
This Portfolio of Exploration (PoE) tracks a transformative learning developmental journey that is directed at changing meaning-making structures and mental models within an innovation practice. The explicit purpose of the Portfolio is to develop new and different perspectives that enable the handling of new and more complex phenomena through self-transformation and increased emotional intelligence development. The Portfolio provides a response to the question: ‘What are the key determinants that enable a Virtual Team (VT) to flourish, where flourishing means developing and delivering on the firm’s innovative imperatives?’ Furthermore, the PoE is structured as an investigation into how higher order meaning making promotes ‘entrepreneurial services’ within an intra-firm virtual team, with a secondary aim of identifying how reasoning about trust influences KGPs to exchange knowledge. I have developed a framework which specifically focuses on the effectiveness of any firm’s Virtual Team (VT) through transforming the meaning making of the VT participants. I hypothesized that it is the way KGPs make meaning (reasoning about trust) which differentiates the firm as a growing firm in the sense of Penrosean resources: ‘inducement to expand and a limit of expansion’ (Penrose, 1959). Reasoning about trust is used as a higher order meaning-making concept in line with Kegan’s (1994) conception of complex meaning making, which is the combining of ideas and data in ways that transform meaning and implicates participants to find new ways of knowledge generation. Simply, it is the VT participants who develop higher order meaning making who hold the capabilities to transform the firm from within, providing a unique competitive advantage that enables the firm to grow.