916 results for computer forensics, digital evidence, computer profiling, time-lining, temporal inconsistency, computer forensic object model


Relevance:

50.00%

Publisher:

Abstract:

This research addresses the design and analysis of a planar monopole antenna with modified geometry, intended for reception of the digital TV signal in operation in Brazil in the 470 MHz to 806 MHz band, which lies within the UHF (Ultra High Frequency, 300 MHz to 3 GHz) spectrum. The work took as its reference the antenna known as "The Hi Monopole", originally proposed to operate in UWB (Ultra Wide Band) systems from 3.1 to 10.6 GHz. Different techniques can be used to adapt an antenna for wideband operation, such as modifying the antenna structure, resistive loading, switching, parasitic elements, and matching structures. Wideband antenna design can follow three different approaches: the time domain, the frequency domain, and the singularity expansion method. The frequency-domain method was employed here for the design of the proposed antenna; some of the techniques mentioned above were analyzed with the goal of increasing the bandwidth, and an antenna prototype was built to validate the concepts employed. The antenna was then designed for the 470 MHz to 890 MHz band, and the prototype built for this band showed good results, which validates the technique employed. Positive and negative aspects of this technique are discussed throughout the work. The commercial software CST® MICROWAVE STUDIO, based on the Finite Integration Technique (FIT), was used for the frequency-domain simulations.
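For orientation, the required bandwidth can be expressed as a fractional bandwidth using the common definition FBW = 2(f_H - f_L)/(f_H + f_L) (a standard formula, not stated in the abstract; frequencies in MHz):

```latex
% Fractional bandwidth FBW = 2(f_H - f_L)/(f_H + f_L), frequencies in MHz
\mathrm{FBW}_{\text{TV band}}   = \frac{2\,(806 - 470)}{806 + 470} \approx 0.53,
\qquad
\mathrm{FBW}_{\text{prototype}} = \frac{2\,(890 - 470)}{890 + 470} \approx 0.62
```

Both values are well above what a simple narrowband monopole provides, which is what motivates the bandwidth-enhancement techniques listed in the abstract.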

Relevance:

50.00%

Publisher:

Abstract:

With the development of Digital TV, equipment is becoming increasingly modern in order to popularize information that may soon reach all Brazilian families. This opens a space for discussion about the many directions that usability applied to ISDB-Tb (Brazilian Digital Television System) interactivity can take. This paper addresses questions connected to the concept of usability, as well as issues related to the life cycle of some technologies (existence time, obsolescence). It also deals with the definition of interactivity on Digital Television, since it is responsible for the emergence of a new contingent of interacting people, ranging from computer and portable-device users to passive TV viewers. It is possible to conclude that Human-Digital TV Interaction (HDTVI) comprises the synergy between three actants on Digital TV: the (possibly collective) TV viewer, the interface, and the broadcaster, which can be represented by an Artificial Intelligence (AI) service.

Relevance:

50.00%

Publisher:

Abstract:

The aim of this study was to evaluate whether digitized images obtained from periapical radiographs taken with a low radiation dose could be improved with the aid of computer software (PhotoStyler) for digital treatment. Serial and standardized radiographs of molar and premolar areas were studied. A total of 57 images, equivalent to radiographs taken with reduced exposure time (60% and 80% of the time considered normal), were digitized, treated, and submitted to the evaluation of seven examiners, who compared them with the untreated images. It was verified that about 80% of the images equivalent to radiographs taken with a 60% reduction of the ordinary exposure time were considered to have sufficient quality to support diagnosis. As for the images taken with an 80% reduction of the ordinary exposure time, about 50% were considered suitable for the same purpose.
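As an illustration of one typical kind of "digital treatment" applied to low-dose images, here is a minimal sketch of global histogram equalization using only numpy; it is not the PhotoStyler workflow, and the toy image is an assumption:

```python
# Minimal sketch: global histogram equalization of an 8-bit grayscale image.
import numpy as np

def equalize_histogram(image):
    """Spread the gray levels of an 8-bit image over the full 0-255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalized cumulative histogram
    lut = np.round(cdf * 255).astype(np.uint8)          # gray-level mapping (lookup table)
    return lut[image]

# toy low-contrast "radiograph": values squeezed into a narrow band
img = np.random.default_rng(0).integers(90, 140, size=(64, 64), dtype=np.uint8)
eq = equalize_histogram(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```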

Relevance:

50.00%

Publisher:

Abstract:

Time correlation functions of current fluctuations were calculated by molecular dynamics (MD) simulations in order to investigate sound waves of high wavevectors in the glass-forming liquid Ca(NO₃)₂·4H₂O. Dispersion curves, ω(k), were obtained for longitudinal (LA) and transverse acoustic (TA) modes, and also for longitudinal optic (LO) modes. Spectra of LA modes calculated by MD simulations were modeled by a viscoelastic model within the memory function framework. The viscoelastic model is used to rationalize the change of slope taking place at k ≈ 0.3 Å⁻¹ in the ω(k) curve of acoustic modes. For still larger wavevectors, mixing of acoustic and optic modes is observed. Partial time correlation functions of longitudinal mass currents were calculated separately for the ions and the water molecules. The wavevector dependence of the excitation energies of the corresponding partial LA modes indicates the coexistence of a relatively stiff subsystem made of cations and anions, and a softer subsystem made of water molecules. © 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4751548]
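A minimal sketch of how a longitudinal current correlation function C_L(k, t) can be estimated from an MD trajectory is given below; the array names and shapes are assumptions, and the code is illustrative rather than the authors' analysis:

```python
# Minimal sketch: longitudinal current correlation function for one wavevector.
# `pos` and `vel` with shape (n_frames, n_particles, 3) are assumed to come
# from an external trajectory reader.
import numpy as np

def longitudinal_current_correlation(pos, vel, k_vec, max_lag):
    """C_L(k,t) = <j_L(k,t) j_L*(k,0)> / N for a single wavevector k_vec."""
    n_frames, n_part, _ = pos.shape
    k_hat = k_vec / np.linalg.norm(k_vec)
    # j_L(k,t) = sum_i (v_i . k_hat) exp(i k . r_i(t))
    phase = np.exp(1j * np.tensordot(pos, k_vec, axes=([2], [0])))  # (frames, N)
    v_long = np.tensordot(vel, k_hat, axes=([2], [0]))              # (frames, N)
    j_L = np.sum(v_long * phase, axis=1)                            # (frames,)
    corr = np.empty(max_lag)
    for lag in range(max_lag):
        corr[lag] = np.mean((j_L[lag:] * np.conj(j_L[:n_frames - lag])).real) / n_part
    return corr

# The dispersion omega(k) is then read off from the peak of the spectrum of
# corr, e.g. via np.fft.rfft(corr), repeated for a set of wavevectors.
```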

Relevance:

50.00%

Publisher:

Abstract:

The aims of this study were to investigate work conditions, estimate the prevalence, and describe risk factors associated with Computer Vision Syndrome among operators of two call centers in São Paulo (n = 476). The methods include a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews, and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as occurring always, often, or sometimes. The multiple logistic regression model was created using the stepwise forward likelihood method, retaining variables with significance levels below 5% (p < 0.05). The operators were mainly female and young (15 to 24 years old). The call center operated 24 hours a day; the operators worked 36 hours per week, with break times of 21 to 35 minutes per day. The symptoms reported were eye fatigue (73.9%), "weight" in the eyes (68.2%), "burning" eyes (54.6%), tearing (43.9%), and weakening of vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. The associations found were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7), and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). Organizational and psychosocial factors at work should be included in prevention programs for visual syndrome among call center operators.
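As an illustration of how odds ratios with 95% confidence intervals can be obtained from a multiple logistic regression, here is a minimal sketch with simulated data; the variable names and the data-generating process are assumptions, not the study's dataset:

```python
# Minimal sketch: odds ratios (OR) and 95% CIs from a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
female = rng.integers(0, 2, n)
high_demand = rng.integers(0, 2, n)
# hypothetical data-generating process for a binary CVS outcome
p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.9 * female + 0.1 * high_demand)))
cvs = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([female, high_demand]))
fit = sm.Logit(cvs, X).fit(disp=0)

print("OR    :", np.exp(fit.params))       # odds ratio = exp(coefficient)
print("95% CI:", np.exp(fit.conf_int()))   # confidence interval on the OR scale
```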

Relevance:

50.00%

Publisher:

Abstract:

Abstract Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in their learning, which requires a shift from traditional passive learning methodologies to an active, multisensory, experiential learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing Pathology undergraduate students. Methods Students were randomized to one of the learning methods, and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention, and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple-choice questionnaire. Students' performance was compared across the three assessment moments, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Results Students who received the game-based method performed better in the post-test assessment only for the Anatomy questions. Students who received the traditional lecture performed better in both the post-test and the long-term post-test for the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students' short- and long-term knowledge retention.
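As an illustration of the kind of between-group comparison across the three assessment moments, here is a minimal sketch with hypothetical scores; the group sizes, score distributions, and use of independent t-tests are assumptions, not the study's actual analysis:

```python
# Minimal sketch: compare two learning groups at pre-test, post-test and follow-up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
game    = {"pre": rng.normal(50, 10, 30), "post": rng.normal(70, 10, 30), "6 months": rng.normal(62, 10, 30)}
lecture = {"pre": rng.normal(50, 10, 30), "post": rng.normal(74, 10, 30), "6 months": rng.normal(68, 10, 30)}

for moment in ("pre", "post", "6 months"):
    t_stat, p_value = stats.ttest_ind(game[moment], lecture[moment])
    print(f"{moment:>8}: game={game[moment].mean():5.1f}  "
          f"lecture={lecture[moment].mean():5.1f}  p={p_value:.3f}")
```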

Relevance:

50.00%

Publisher:

Abstract:

The main problem with cone-beam computed tomography (CT) systems for industrial applications employing 450 kV X-ray tubes is the high amount of scattered radiation which is added to the primary radiation (signal). This stray radiation leads to a significant degradation of the image quality. A better understanding of the scattering and methods to reduce its effects are therefore necessary to improve the image quality. Several studies have been carried out in the medical field at lower energies, whereas studies in industrial CT, especially for energies up to 450 kV, are lacking. Moreover, the studies reported in the literature do not consider the scattered radiation generated by the CT system structure and the walls of the X-ray room (environmental scatter). In order to investigate the scattering in CT projections, a GEANT4-based Monte Carlo (MC) model was developed. The model, which has been validated against experimental data, has enabled the calculation of the scattering including the environmental scatter, the optimization of an anti-scatter grid suitable for the CT system, and the optimization of the hardware components of the CT system. The investigation of multiple scattering in the CT projections showed that its contribution is 2.3 times that of the primary radiation for certain objects. The results for the environmental scatter showed that it is the major component of the scattering for aluminum box objects with a front size of 70 x 70 mm², and that it strongly depends on the thickness of the object and therefore on the projection. For that reason, its correction is one of the key factors for achieving high-quality images. The anti-scatter grid optimized by means of the developed MC model was found to reduce the scatter-to-primary ratio in the reconstructed images by 20%. The object and environmental scatter calculated by means of the simulation were used to improve the scatter correction algorithm, which Empa was able to patent. The results showed that the cupping effect in the corrected image is strongly reduced. The developed CT simulation is a powerful tool to optimize the design of the CT system and to evaluate the contribution of the scattered radiation to the image. Besides, it has offered a basis for a new scatter correction approach with which it has been possible to achieve images with the same spatial resolution as state-of-the-art well-collimated fan-beam CT, with a factor of 10 gain in reconstruction time. This result has a high economic impact in non-destructive testing and evaluation, and reverse engineering.
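As a rough illustration of two quantities discussed above, the scatter-to-primary ratio and a subtraction-based scatter correction, here is a minimal numpy sketch; the array names, toy pixel values, and the simple subtract-then-log correction are assumptions and do not reproduce the patented Empa algorithm:

```python
# Minimal sketch: scatter-to-primary ratio and a basic scatter-subtraction
# correction applied to a projection before the log transform used in
# reconstruction.
import numpy as np

def scatter_to_primary_ratio(scatter, primary):
    """Pixel-wise SPR = S / P for one projection."""
    return scatter / np.maximum(primary, 1e-12)

def scatter_corrected_projection(measured, scatter_estimate, flat_field):
    """Subtract the (e.g. MC-estimated) scatter, then convert to line integrals."""
    primary = np.clip(measured - scatter_estimate, 1e-12, None)
    return -np.log(primary / flat_field)

# toy 2D projections
measured = np.full((4, 4), 1200.0)
scatter  = np.full((4, 4), 400.0)
flat     = np.full((4, 4), 2000.0)
print(scatter_to_primary_ratio(scatter, measured - scatter))
print(scatter_corrected_projection(measured, scatter, flat))
```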

Relevance:

50.00%

Publisher:

Abstract:

The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of the functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as supplementary modules for the interface between the library of integral functions and the diagram generator.
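GiNaC itself is a C++ library and its API is not reproduced here; purely as an illustration of the kind of symbolic manipulation such a system performs (expansion, differentiation, series expansion in a small parameter), here is an analogous sketch using Python's sympy:

```python
# Illustrative only: sympy analogue of typical symbolic operations; this is
# NOT GiNaC code and does not reflect the GiNaC C++ API.
import sympy as sp

x, y, eps = sp.symbols("x y epsilon")
expr = sp.expand((x + y) ** 3)                          # polynomial expansion
deriv = sp.diff(expr, x)                                # symbolic differentiation
laurent = sp.series(1 / (eps * (1 + eps)), eps, 0, 3)   # expansion in a small parameter
print(expr, deriv, laurent, sep="\n")
```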

Relevance:

50.00%

Publisher:

Abstract:

This work presents algorithms for the calculation of the electrostatic interaction in partially periodic systems. The framework for these algorithms is provided by the simulation package ESPResSo, of which the author was one of the main developers. The prominent features of the program are listed and its internal structure is described. Algorithms for the calculation of the Coulomb sum in three-dimensionally periodic systems are then described; these methods are the foundation for the algorithms for partially periodic systems presented in this work. Starting from the MMM2D method for systems with one non-periodic coordinate, the ELC method for these systems is developed. This method consists of a correction term which allows methods for three-dimensional periodicity to be used also for the case of two periodic coordinates. The computation time of this correction term is negligible for large numbers of particles. The performance of MMM2D and ELC is demonstrated by results from the implementations contained in ESPResSo. It is also discussed how different dielectric constants inside and outside of the simulation box can be realized. For systems with one periodic coordinate, the MMM1D method is derived from the MMM2D method. This method is applied to the problem of the attraction of like-charged rods in the presence of counterions, and results of the strong-coupling theory for the equilibrium distance of the rods at infinite counterion coupling are checked against results from computer simulations. The degree of agreement between the simulations at finite coupling and the theory can be characterized by a single parameter gamma_RB. In the special case T=0, one finds under certain circumstances flat configurations, in which all charges are located in the rod-rod plane. The energetically optimal configuration and its stability are determined analytically; they depend on only one parameter gamma_z, similar to gamma_RB. These findings are in good agreement with results from computer simulations.
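As a point of reference only (this is not the MMM2D or ELC algorithm), the Coulomb energy of a system periodic in x and y can be evaluated naively by summing over a finite number of in-plane image cells; the function name and parameters below are illustrative assumptions:

```python
# Naive reference sum for a slab system periodic in x and y only. The image
# sum converges slowly and only for charge-neutral systems; MMM2D/ELC reach
# the same limit with controlled error and far better scaling.
import numpy as np

def coulomb_energy_2d_periodic(pos, q, box_xy, n_images=10, prefactor=1.0):
    n = len(q)
    energy = 0.0
    for ix in range(-n_images, n_images + 1):
        for iy in range(-n_images, n_images + 1):
            shift = np.array([ix * box_xy[0], iy * box_xy[1], 0.0])
            for i in range(n):
                for j in range(n):
                    if ix == 0 and iy == 0 and i == j:
                        continue  # skip self-interaction in the central cell
                    r = np.linalg.norm(pos[i] - pos[j] + shift)
                    energy += 0.5 * prefactor * q[i] * q[j] / r
    return energy

# toy example: two opposite charges in a slab geometry
pos = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 2.0]])
q = np.array([1.0, -1.0])
print(coulomb_energy_2d_periodic(pos, q, box_xy=(10.0, 10.0)))
```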

Relevance:

50.00%

Publisher:

Abstract:

Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home, and in every pocket. As a consequence, in recent years software has been undergoing an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. The perspective then shifts from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and at the same time provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
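As a minimal illustration of the actor abstraction mentioned above (a mailbox plus a thread that processes messages one at a time), here is a toy sketch in Python; it is unrelated to simpAL itself and all names are illustrative:

```python
# Toy actor: asynchronous message sends, sequential message handling.
import queue
import threading
import time

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)         # asynchronous, non-blocking send

    def _run(self):
        while True:
            message = self._mailbox.get()  # messages are handled one at a time
            self.receive(message)

class Printer(Actor):
    def receive(self, message):
        print("got:", message)

p = Printer()
p.send("hello")
p.send("world")
time.sleep(0.2)  # give the daemon thread time to drain the mailbox in this toy example
```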

Relevance:

50.00%

Publisher:

Abstract:

One of the open questions in current physics is the understanding of systems out of equilibrium. In contrast to equilibrium physics, no formalism is currently known in this area that allows a systematic description of the different systems. To improve the understanding of such systems, this work studies two different systems that show strongly nonlinear behavior under an external field: on the one hand, the behavior of particles under an externally applied force, and on the other hand, the behavior of a system near the critical point under shear. The model system in the first part of the work is a binary Yukawa mixture, which shows a glass transition at low temperatures. This leads to a strongly increasing relaxation time of the system, so that nonlinear behavior is observed relatively quickly even for small forces. Depending on the applied constant force, three regimes with strongly differing particle behavior are identified in this work. In the second part of the work, the Ising model under shear is considered. Near the critical point, the applied shear field influences the fluctuations in the system. As a consequence, the system becomes strongly anisotropic and one finds two different correlation lengths that diverge with different exponents. The usual isotropic finite-size scaling formalism can therefore no longer be applied to this system. This work shows how it can be generalized to the anisotropic case and how the critical points, as well as the associated critical exponents, can then be computed.
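One common way to write the anisotropic generalization sketched above (the notation is an assumption, not taken from the thesis) uses two correlation lengths and a generalized aspect ratio:

```latex
% Anisotropic finite-size scaling with two correlation lengths (assumed notation)
\xi_\parallel \sim |t|^{-\nu_\parallel}, \qquad \xi_\perp \sim |t|^{-\nu_\perp},
\qquad
O(t, L_\parallel, L_\perp) =
  L_\perp^{\,\omega_O}\,
  \tilde{O}\!\left( t\,L_\perp^{1/\nu_\perp},\;
                    L_\parallel / L_\perp^{\,\nu_\parallel/\nu_\perp} \right)
```

Here t is the reduced distance from the critical point, omega_O is an observable-dependent exponent, and the second scaling argument is the generalized aspect ratio that must be held fixed when comparing system sizes.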

Relevance:

50.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with huge amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are the best choices to understand and point out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is known as a new, emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context, and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
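As a minimal illustration of the CNN side of the comparison, here is a small convolutional network for object recognition in PyTorch; the architecture, layer sizes, and input shape are illustrative assumptions and not the network evaluated in the dissertation:

```python
# Minimal sketch: a small CNN for image classification (PyTorch).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 RGB input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))   # batch of four 32x32 RGB images
print(logits.shape)                         # torch.Size([4, 10])
```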

Relevance:

50.00%

Publisher:

Abstract:

Percutaneous nephrolithotomy (PCNL) for the treatment of renal stones and other related renal diseases has proved its efficacy and has stood the test of time compared with open surgical methods and extracorporeal shock wave lithotripsy. However, access to the collecting system of the kidney is not easy, because the available intra-operative imaging modalities only provide a two-dimensional view of the surgical scenario. With this lack of visual information, several punctures are often necessary, which increases the risk of renal bleeding; splanchnic, vascular, or pulmonary injury; or damage to the collecting system, which sometimes makes continuation of the procedure impossible. In order to address this problem, this paper proposes a workflow for the introduction of a stereotactic needle guidance system for PCNL procedures. An analysis of the imposed clinical requirements and an instrument guidance approach that provides the physician with more intuitive planning and visual guidance for accessing the collecting system of the kidney are presented.