978 results for computer art


Relevance:

30.00%

Publisher:

Abstract:

Selected ubiquitous technologies encourage collaborative participation between higher education students and educators within a virtual, socially networked e-learning landscape. This paper discusses multiple modes of teaching and learning, ranging from real-world experiences to text and digital images accessed within the Deakin Studies Online learning management system, and a constructed virtual world in which users' creative imagination transports them to the "other side" of their computer screens. These constructed environments support interaction between communities of learners and enable multiple simultaneous participants to access graphically built 3D environments, interact with digital artifacts and various functional tools, represent themselves through avatars, communicate with other participants, and engage in collaborative art learning. A narrative interpretative research approach was used to profile the 21st-century higher education student learner and to investigate the lived experience and multiple art learning perspectives documented in student visual journal entries and art educator observations, in order to ascertain whether an e-technology-rich augmented learning environment resulted in the establishment of more effective e-learning communities of practice.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, organizations face the problem of keeping their information protected, available, and trustworthy. In this context, machine learning techniques have been extensively applied to intrusion detection. Since manual labeling is very expensive, several works attempt to handle intrusion detection with traditional clustering algorithms. In this paper, we introduce a new pattern recognition technique called Optimum-Path Forest (OPF) clustering for this task. Experiments on three public datasets have shown that the OPF classifier may be a suitable tool to detect intrusions on computer networks, since it outperformed some state-of-the-art unsupervised techniques. © 2012 IEEE.
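As a rough illustration of the unsupervised setup described above (clustering unlabeled network records, then scoring the clusters against ground truth), here is a minimal Python sketch. Since the OPF implementation itself is not shown in the abstract, scikit-learn's KMeans stands in for the clustering step; the synthetic data, feature count, and majority-vote scoring are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of the unsupervised intrusion-detection setup. KMeans
# stands in for OPF clustering, whose implementation is not shown here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical traffic features: two Gaussian blobs standing in for
# "normal" and "attack" connection records.
normal = rng.normal(0.0, 1.0, size=(500, 8))
attack = rng.normal(3.0, 1.0, size=(100, 8))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 100)  # ground truth, used only for scoring

X = StandardScaler().fit_transform(X)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Clustering is unsupervised, so assign each cluster the majority label
# of its members before scoring (a common evaluation convention).
pred = np.empty_like(clusters)
for c in np.unique(clusters):
    mask = clusters == c
    pred[mask] = np.bincount(y[mask]).argmax()

print("detection accuracy:", (pred == y).mean())
```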

Relevance:

30.00%

Publisher:

Abstract:

Graduate program in Electrical Engineering - FEIS

Relevance:

30.00%

Publisher:

Abstract:

The main problem with cone beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the large amount of scattered radiation added to the primary radiation (signal). This stray radiation leads to a significant degradation of image quality. A better understanding of the scattering, and methods to reduce its effects, are therefore necessary to improve image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results for the environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 × 70 mm², and that it strongly depends on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which Empa was able to patent. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool for optimizing the design of the CT system and for evaluating the contribution of the scattered radiation to the image. Moreover, it has provided the basis for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art, well-collimated fan-beam CT, with a factor-of-10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and in reverse engineering.
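To make the role of scatter concrete, here is a minimal Python sketch of projection-domain scatter correction under the common additive model (measured = primary + scatter). The thesis' MC-derived scatter estimates and the patented correction algorithm are not public here, so a synthetic scatter field and a toy attenuation profile stand in for them; the sketch only shows why subtracting an accurate scatter estimate before the log transform removes the bias that causes cupping.

```python
# Minimal sketch: additive scatter biases the line integrals low; an
# accurate scatter estimate subtracted before the log restores them.
import numpy as np

I0 = 1e5                                             # unattenuated intensity
primary = I0 * np.exp(-np.linspace(0.5, 2.5, 256))   # toy attenuation profile
scatter = 0.3 * primary.mean() * np.ones(256)        # slowly varying scatter
measured = primary + scatter

p_uncorrected = -np.log(measured / I0)               # biased low -> cupping
p_corrected = -np.log(np.clip(measured - scatter, 1.0, None) / I0)
p_true = -np.log(primary / I0)

print("max error, uncorrected:", np.abs(p_uncorrected - p_true).max())
print("max error, corrected:  ", np.abs(p_corrected - p_true).max())
```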

Relevance:

30.00%

Publisher:

Abstract:

The field of "computer security" is often considered something between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies for evaluating the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be applied to different threat scenarios with the same degree of effectiveness. Security testing methodologies are the first step towards standardized security evaluation processes and towards understanding how security threats evolve over time. This dissertation analyzes some of the most widely used methodologies, identifying differences and commonalities useful for comparing them and assessing their quality. The dissertation then proposes a new, enhanced methodology built by keeping the best of every analyzed methodology. The designed methodology is tested on different systems with very effective results, which is the main evidence that it could really be applied in practical cases. Most of the dissertation discusses and demonstrates how the presented testing methodology can be applied to such different systems, and even used to evade security measures by inverting goals and scopes. Real cases are often hard to find in methodology documents; by contrast, this dissertation aims to show real and practical cases, offering technical details about how to apply the methodology. Electronic voting systems are the first field test considered, with Pvote and Scantegrity as the two tested electronic voting systems. The usability and effectiveness of the designed methodology for electronic voting systems is demonstrated through this case analysis. Furthermore, reputation and antivirus engines have also been analyzed, with similar results. The dissertation concludes by presenting some general guidelines for building a coordination-based approach to electronic voting systems, improving security without decreasing system modularity.

Relevance:

30.00%

Publisher:

Abstract:

Mainstream hardware is becoming parallel, heterogeneous, and distributed, on every desk, in every home, and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and by growing network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. It then shifts the perspective from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which has its roots in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
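Since the thesis positions agents as a natural evolution of actors, a minimal sketch of the underlying actor abstraction may help: an actor encapsulates private state, processes one message at a time from a mailbox, and interacts only via asynchronous message passing. This is generic illustrative Python, not simpAL code (simpAL itself is not shown in the abstract).

```python
# Minimal actor sketch: private state, a mailbox, one message at a time.
import threading
import queue

class Counter:
    """An actor whose only state is a private counter."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "inc":
                self.count += 1           # state touched by one thread only
            elif msg == "get":
                reply_to.put(self.count)  # reply via the sender's queue
            elif msg == "stop":
                return

    def send(self, msg, reply_to=None):
        self.mailbox.put((msg, reply_to))

counter = Counter()
for _ in range(3):
    counter.send("inc")
reply = queue.Queue()
counter.send("get", reply)
print(reply.get())  # -> 3, with no locks in user code
counter.send("stop")
```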

Relevance:

30.00%

Publisher:

Abstract:

This thesis aimed to address some of the issues that, at the current state of the art, prevent P300-based brain-computer interface (BCI) systems from moving out of research laboratories and into end users' homes. An innovative asynchronous classifier has been defined and validated. It relies on the introduction of a set of thresholds into the classifier; these thresholds have been assessed by considering the distributions of score values relating to target stimuli, non-target stimuli, and epochs of voluntary no-control. With the asynchronous classifier, a P300-based BCI system can adapt its speed to the current state of the user and can automatically suspend control when the user diverts attention from the stimulation interface. Since EEG signals are non-stationary and show inherent variability, in order to make long-term use of BCI possible it is important to track changes in ongoing EEG activity and to adapt the BCI model parameters accordingly. To this aim, the asynchronous classifier has subsequently been improved by introducing a self-calibration algorithm for the continuous and unsupervised recalibration of the subjective control parameters. Finally, an index for the online monitoring of EEG quality has been defined and validated in order to detect potential problems and system failures. This thesis ends with the description of a translational work involving end users (people with amyotrophic lateral sclerosis, ALS). Focusing on the concepts of the user-centered design approach, the phases relating to the design, development, and validation of an innovative assistive device are described. The proposed assistive technology (AT) has been specifically designed to meet the needs of people with ALS during the different phases of the disease (i.e., the degree of motor impairment). Indeed, the AT can be accessed with several input devices, either conventional (mouse, touchscreen) or alternative (switches, headtracker), up to a P300-based BCI.
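The asynchronous classification idea lends itself to a compact illustration: a selection is emitted only when the best score clears a threshold; otherwise the system declares voluntary no-control and suspends output. The sketch below is illustrative Python, with placeholder scores and threshold; the thesis derives its thresholds from the measured score distributions of target, non-target, and no-control epochs.

```python
# Minimal sketch of thresholded (asynchronous) P300 classification.
import numpy as np

def asynchronous_decision(scores, threshold):
    """Return the selected item index, or None for voluntary no-control."""
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Illustrative classifier scores for six stimuli (one per selectable item).
attending = np.array([0.1, 0.3, 1.6, 0.2, 0.0, 0.4])  # clear target at index 2
idle = np.array([0.2, 0.5, 0.1, 0.4, 0.3, 0.2])       # user not attending

# The threshold would be fitted to the score distributions; 1.0 is a
# placeholder value for this toy example.
print(asynchronous_decision(attending, threshold=1.0))  # -> 2
print(asynchronous_decision(idle, threshold=1.0))       # -> None
```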

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis deals with classical vector spin glasses (a kind of disordered magnet) on various lattice types. Since they are relevant for experimental realization, a theoretical understanding of spin-glass models with few spin components and low lattice dimension is of great importance. However, since this proves very difficult, new and promising approaches are needed. This thesis therefore considers the limit of infinitely many spin dimensions. In this limit, several simplifications arise compared to models of low spin dimension, so that for this important problem properties at both zero and finite temperature are determined, predominantly with numerical methods. Both hypercubic lattices and a versatile one-dimensional model are considered; the latter makes it possible to study different universality classes by merely tuning a single parameter. Finite-size scaling forms, critical exponents, ratios of critical exponents, and other critical quantities are proposed and compared with numerical results. A detailed description of the derivations of all numerically evaluated equations is also given. At zero temperature, a thorough study of ground states and defect energies is carried out; a number of interesting quantities are analyzed and, in particular, the lower critical dimension is determined. At finite temperature, the order parameter and the spin-glass susceptibility are accessible via the numerically computed correlation matrix. The spin-glass model in the limit of infinitely many spin components can be regarded as a starting point for studying the more natural models of low spin dimension. A model combining the advantages of the former with the properties of the latter would, of course, be desirable; therefore, a model with anisotropy is proposed and tested in an attempt to achieve this goal. Attractive ways of using the model are pointed out, in order to encourage deeper study. Finally, so-called "real-space" renormalization group calculations are carried out, both analytically and numerically, for finite-dimensional vector spin glasses with a finite number of spin components, using a newly determined Migdal-Kadanoff recursion relation. Among other quantities, the lower critical dimension is determined.
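The finite-temperature analysis mentioned above reads the spin-glass susceptibility off the numerically computed correlation matrix. Here is a minimal sketch of that step, assuming the standard definition chi_SG = (1/N) * sum_{ij} C_ij^2 with C_ij = <S_i . S_j>; the thesis' actual code is not shown, and a random symmetric matrix stands in for the computed correlations.

```python
# Minimal sketch: spin-glass susceptibility from a correlation matrix,
# chi_SG = (1/N) * sum_ij C_ij**2, with a random stand-in for C.
import numpy as np

rng = np.random.default_rng(0)
N = 64
A = rng.normal(size=(N, N))
C = (A + A.T) / 2          # symmetric stand-in for <S_i . S_j>
np.fill_diagonal(C, 1.0)   # normalized spins: C_ii = 1

chi_sg = (C ** 2).sum() / N
print("spin-glass susceptibility:", chi_sg)
```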

Relevance:

30.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the big hat of deep learning, and are the best choices for understanding and pointing out the strengths and weaknesses of each. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm: a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context, and attention during the learning process, which are typical of the human brain. In the end, the thesis sets out to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
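For concreteness, here is a minimal CNN of the kind entering such a comparison, written in PyTorch purely as an illustration; the dissertation's actual architectures and the HTM counterpart are not shown in the abstract.

```python
# Minimal illustrative CNN for small-image object recognition.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 32x32 -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))  # dummy batch of 4 RGB images
print(logits.shape)                         # torch.Size([4, 10])
```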

Relevance:

30.00%

Publisher:

Abstract:

Physically based modeling for computer animation makes it possible to produce more realistic motion in less time, without requiring the expertise of skilled animators. But a computer animation is not only a numerical simulation based on classical mechanics, since it must follow a precise story-line. One common way to define goals in an animation is to add geometric constraints, and there are several methods to manage these constraints within a physically based framework. In this paper, we present an algorithm for constraint handling based on Lagrange multipliers. After a few remarks on the equations of motion that we use, we present a first algorithm proposed by Platt. We show with a simple example that this method is not reliable. Our contribution consists in improving this algorithm to provide an efficient and robust method for handling simultaneous active constraints.
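A minimal sketch of the standard Lagrange-multiplier formulation that such algorithms build on may help: the constrained equations of motion M qdd = f + J^T lambda, together with the acceleration-level constraint J qdd = c, are assembled into one saddle-point (KKT) linear system and solved. This is the textbook formulation, not Platt's algorithm or the paper's improved one; the numbers are illustrative.

```python
# Minimal sketch of Lagrange-multiplier constraint handling:
#     [ M  J^T ] [ qdd ]   [ f ]
#     [ J   0  ] [ mu  ] = [ c ]     with mu = -lambda.
# Example: one 2D particle constrained to zero vertical acceleration
# (J qdd = 0) while gravity pulls it down.
import numpy as np

M = np.eye(2)                   # unit mass, 2 DoF (x, y)
f = np.array([0.0, -9.81])      # gravity
J = np.array([[0.0, 1.0]])      # constraint Jacobian: picks the y component
c = np.array([0.0])             # desired constraint acceleration

K = np.block([[M, J.T],
              [J, np.zeros((1, 1))]])
rhs = np.concatenate([f, c])
sol = np.linalg.solve(K, rhs)

qdd, lam = sol[:2], -sol[2:]    # sign convention from the block above
print("acceleration:", qdd)     # -> [0, 0]: gravity cancelled by constraint
print("multiplier  :", lam)     # constraint force magnitude (9.81)
```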

Relevance:

30.00%

Publisher:

Abstract:

The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely either on "free hand" 2D drawings on picture printouts or on computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses three standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground-truth data. The results showed that the application can provide 3D face representations accurate enough for clinical use (average error within 2 mm) in less than 5 minutes.
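The abstract does not detail which well-established methods are coupled, but one standard building block in landmark-based 3D face pipelines is similarity (Procrustes) alignment of corresponding point sets. The sketch below illustrates only that generic step and makes no claim about the paper's actual pipeline; the landmark data are synthetic.

```python
# Purely illustrative sketch: similarity (Procrustes/Umeyama) alignment
# of two corresponding 3D point sets, a common step in face pipelines.
import numpy as np

def procrustes_align(src, dst):
    """Find scale s, rotation R, translation t minimizing ||s*R@src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 3))                  # hypothetical 3D landmarks
s_true, t_true = 1.7, np.array([0.3, -0.1, 2.0])
moved = s_true * pts + t_true                   # scaled + translated copy

s, R, t = procrustes_align(pts, moved)
aligned = s * pts @ R.T + t
print("residual:", np.abs(aligned - moved).max())  # ~0
```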

Relevance:

30.00%

Publisher:

Abstract:

Introduced about two decades ago, computer-assisted orthopedic surgery (CAOS) has emerged as a new and independent area, owing to the importance of treating musculoskeletal diseases in orthopedics and traumatology, the increasing availability of different imaging modalities, and advances in analytics and navigation tools. The aim of this paper is to present the basic elements of CAOS devices and to review state-of-the-art examples of the different imaging modalities used to create virtual representations, of position tracking devices for navigation systems, of surgical robots, of methods for registration and referencing, and of CAOS modules that have been realized for different surgical procedures. Future perspectives are also outlined.

Relevance:

30.00%

Publisher:

Abstract:

This paper is a preliminary version of Chapter 3 of a State-of-the-Art Report by the IASS Working Group 5: Concrete Shell Roofs. The intention of this chapter is to set forth, for those who intend to design concrete shell roofs, information and advice about the selection, verification, and utilization of commercial computer tools for analysis and design tasks. The computer analysis and design steps for a concrete shell roof are described. Advice follows on the aspects to be considered in the application of commercial finite element (FE) computer programs to concrete shell analysis, starting with recommendations on how novices can gain confidence and competence in the use of software. To establish vocabulary and provide background references, brief surveys are presented of, first, element types and formulations for shells and, second, challenges presented by advanced analyses of shells. The final section of the chapter indicates what capabilities to seek in selecting commercial FE software for the analysis and design of concrete shell roofs. Brief concluding remarks summarize advice regarding the judicious use of computer analysis in design practice.