916 results for computer forensics, digital evidence, computer profiling, time-lining, temporal inconsistency, computer forensic object model


Relevance:

100.00%

Abstract:

Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous- or discrete-time aspects. We propose an approach based on state transformation, more particularly on partitioning the phase portrait into regions, where each subregion is modeled as a two-dimensional linear time-invariant system. The Takagi-Sugeno model, which is a combination of the local models, is then calculated. The simulation results show that the Alpha partition is well suited for dealing with such systems.
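To make the combination step concrete, here is a minimal Python sketch of a Takagi-Sugeno blend of two local two-dimensional LTI models; the matrices, membership functions and step size are illustrative assumptions, not the paper's actual partition.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a Takagi-Sugeno
# model blends local two-dimensional LTI models x' = A_i x using
# normalized membership weights over the partitioned state space.

A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])   # hypothetical local model, region 1
A2 = np.array([[0.0, 1.0], [-2.0, -0.1]])   # hypothetical local model, region 2

def memberships(x):
    """Simple complementary membership functions of the first state variable."""
    w1 = max(0.0, min(1.0, (1.0 - x[0]) / 2.0))  # high when x[0] is low
    w2 = 1.0 - w1
    return np.array([w1, w2])

def ts_dynamics(x):
    """Weighted combination of the local models (the T-S model)."""
    w = memberships(x)
    A = w[0] * A1 + w[1] * A2
    return A @ x

# One explicit-Euler simulation step from an initial state.
x = np.array([0.5, 0.0])
x_next = x + 0.01 * ts_dynamics(x)
print(x_next)
```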

Relevance:

100.00%

Abstract:

Introduction: DSA is the method of choice for the follow-up of patients with embolized intracranial aneurysms, but it can be associated with disabling or fatal complications. MRA has been proposed as an alternative method because of its lower cost and lower morbidity and mortality, although its diagnostic performance remains under discussion owing to the development of new protocols, more powerful scanners, and new applications of DSA. Methods: We searched the literature to date and compared the diagnostic performance of MRA against DSA for detecting residual flow after therapeutic embolization of intracranial aneurysms. We performed a systematic review of the literature and a meta-analysis based on 34 articles retrieved in a search covering the PubMed, Scopus, SciELO and BVS databases. Results: TOF-MRA showed a sensitivity of 86.8% (84.3%-89.1%) and a specificity of 91.2% (89%-93.1%); the SROC for TOF-MRA yielded an AUC of 0.95. CE-MRA showed a sensitivity of 88.1% (84.6%-91.1%) and a specificity of 89.1% (85.7%-91.9%); the SROC yielded an AUC of 0.93. The analysis stratified by scanner field strength found that TOF-MRA performs better on 3T scanners, although the difference is not statistically significant. Interobserver agreement for TOF-MRA and CE-MRA was moderate to very good. Discussion: The diagnostic performance of MRA in the follow-up of embolized intracranial aneurysms proved to be good, with sensitivity above 84% and slightly better for TOF-MRA, without managing to replace DSA. However, the results should be interpreted with caution given the heterogeneity of the included studies. (Abbreviations: DSA: Digital Subtraction Angiography; MRA: Magnetic Resonance Angiography; TOF-MRA: Time-of-Flight MR Angiography; CE-MRA: Contrast-Enhanced MR Angiography.)
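For readers unfamiliar with the pooled figures, the sketch below shows the underlying per-study arithmetic: sensitivity and specificity derived from 2x2 counts against the DSA gold standard. The counts are hypothetical, chosen only for illustration.

```python
# Reference arithmetic for the pooled figures reported above: per-study
# sensitivity and specificity derive from 2x2 counts against the DSA
# gold standard. The counts below are hypothetical.

def sensitivity(tp, fn):
    """Fraction of DSA-positive (residual flow) cases that MRA detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of DSA-negative cases that MRA correctly clears."""
    return tn / (tn + fp)

tp, fn, tn, fp = 46, 7, 122, 12   # hypothetical single-study counts
print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # 86.8%
print(f"specificity = {specificity(tn, fp):.1%}")   # 91.0%
```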

Relevance:

100.00%

Abstract:

This paper presents evidence for several features of the population of chess players, and for the distribution of their performances measured in terms of Elo ratings and by computer analysis of moves. Evidence that ratings have remained stable since the inception of the Elo system in the 1970s is given in several forms: by showing that the population of strong players fits a simple logistic-curve model without inflation, by plotting players’ average error against the FIDE category of tournaments over time, and by showing that the skill parameters of a model based on computer analysis keep a nearly constant relation to Elo rating across that time. The distribution of the model’s Intrinsic Performance Ratings can hence be used to compare populations that have limited interaction, such as players in a national chess federation versus FIDE, and to ascertain relative drift in their respective rating systems.
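As an illustration of the logistic-curve fit mentioned above, here is a minimal sketch using scipy; the year/count data are invented for demonstration and are not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the kind of logistic-curve fit described in the
# abstract; the year/count data below are hypothetical.

def logistic(t, K, r, t0):
    """Logistic growth: a population saturating at carrying capacity K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.array([1971, 1981, 1991, 2001, 2011], dtype=float)
players = np.array([100, 400, 1200, 2200, 2800], dtype=float)  # hypothetical counts

popt, _ = curve_fit(logistic, years, players, p0=[3000, 0.1, 1995])
K, r, t0 = popt
print(f"carrying capacity K = {K:.0f}, growth rate r = {r:.3f}, midpoint t0 = {t0:.1f}")
```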

Relevance:

100.00%

Abstract:

This paper examines the level of satisfaction with computer-assisted tutorial courseware used as part of the undergraduate property and construction course at the University of Melbourne. It is important to determine whether the computer-assisted teaching model improves the learning experience for students. This research examines the levels of satisfaction with a courseware model for teaching construction cost planning. The conclusions suggest that the advantages of using the model must be identified and actively supported throughout the whole course. In addition, further development of computer-based courseware is pointless unless the problems associated with its use can be minimized.

Relevance:

100.00%

Abstract:

This article explores the intersections between drama and digital gaming and the educational possibilities both offer for literacy. The article draws on a model for the educational uses of digital gaming and on three case studies from the Australian Research Council funded three-and-a-half-year project, Literacy in the digital world of the twenty first century: Learning from computer games. This model theorises the scope of the possibilities for literacy outcomes from the use of computer games. The article describes how the model works and then applies it to drama education, specifying some new ways of thinking about the literacy outcomes of drama education. Process drama is theorised as the creation of text-in-action.

Relevance:

100.00%

Abstract:

Mandatory food fortification “occurs when governments legally oblige food producers to fortify particular foods or categories of foods with specified micronutrients” [1]. Because mandatory food fortification exposes everyone in the population who consumes the fortified food(s) to raised levels of the specified micronutrient, it is one of the most powerful policy instruments available to influence dietary intake and population health outcomes. Many national food regulatory authorities have established policy guidance on mandatory food fortification. From an international perspective, the Codex Alimentarius Commission (Codex) provides principles for the addition of essential nutrients to foods [2]. According to these Codex principles, mandatory food fortification with additional vitamins and minerals is recommended where there is proven public health and nutrition need (Note: these principles are under review at the time of writing).

Relevance:

100.00%

Abstract:

The aim of this study was to evaluate whether digitized images obtained from occlusal radiographs taken with an under- or overdose of radiation could be improved with the aid of computer software for digital image treatment. Thirteen occlusal radiographs of a dry skull were taken using 13 different exposure times. The radiographs were digitized and then manipulated with an image-editing program. A total of 143 evaluations were performed by specialists in dental radiology, who classified the radiographs as appropriate or not appropriate for interpretation. The Z test was used for the statistical analysis of the data, and the results showed that it is possible to manipulate digitized radiographic images taken with 75% of the ideal exposure time and make them suitable for interpretation and diagnosis. Conversely, it was concluded that the overexposed images, taken 57.50% above the standard exposure time, were inadequate.
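The kind of digital treatment described can be sketched with the Pillow imaging library; the file name and enhancement factors below are hypothetical and are not the software or settings used in the study.

```python
from PIL import Image, ImageEnhance

# Sketch of the kind of digital treatment described: compensating an
# underexposed digitized radiograph with brightness/contrast adjustment.
# "radiograph.png" and the enhancement factors are hypothetical.

img = Image.open("radiograph.png").convert("L")   # 8-bit grayscale

brightened = ImageEnhance.Brightness(img).enhance(1.3)     # +30% brightness
adjusted = ImageEnhance.Contrast(brightened).enhance(1.2)  # +20% contrast

adjusted.save("radiograph_adjusted.png")
```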

Relevance:

100.00%

Abstract:

Digital evidence requires the adoption of precautions, as in any other scientific examination. We provide an overview of the methodological and practical aspects of digital forensics in light of the recent ISO/IEC 27037:2012 standard on the handling of digital evidence during the identification, collection, acquisition and preservation phases. These methodologies adhere strictly to the integrity and authenticity requirements imposed by digital forensics legislation, in particular Law 48/2008 ratifying the Budapest Convention on Cybercrime. With regard to the crime of child pornography, we review the EU and national legislation, emphasizing the aspects relevant to forensic analysis. Since file sharing on peer-to-peer networks is the channel on which the exchange of illicit material is most concentrated, we give an overview of the most widespread protocols and systems, with emphasis on the eDonkey network and the eMule software, which are widely used among Italian users. We touch on the problems encountered in the investigation and repression of the phenomenon, which fall within the remit of the police forces, before focusing on the main contribution: the forensic analysis of computer systems seized from persons under investigation for (or charged with) child pornography offences. The design and implementation of eMuleForensic makes it possible to analyze, extremely precisely and quickly, the events that occur when the eMule file-sharing software is used; the software is available both online at http://www.emuleforensic.com and as a tool within the DEFT forensic distribution. Finally, we propose an operational protocol for the forensic analysis of computer systems involved in child pornography investigations.
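The integrity and authenticity requirements mentioned above are typically met by cryptographically hashing the acquired evidence so that later copies can be verified; a minimal sketch follows (not part of eMuleForensic; the file path is hypothetical).

```python
import hashlib

# Sketch of the integrity step behind ISO/IEC 27037-style acquisition:
# hash the acquired evidence image so later copies can be verified.
# "evidence.dd" is a hypothetical disk-image path.

def hash_evidence(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a (possibly large) evidence file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

acquisition_hash = hash_evidence("evidence.dd")
print("SHA-256:", acquisition_hash)
# Re-hashing a working copy and comparing digests demonstrates that the
# data being analyzed is identical to what was originally acquired.
```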

Relevance:

100.00%

Abstract:

A number of research and development activities are exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach allows real-time applications with different levels of criticality to execute on the same computer board. To achieve this, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
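The temporal side of TSP is commonly realized as a fixed cyclic schedule of partition windows inside a repeating major frame. The sketch below illustrates that idea with hypothetical partition names and window lengths; it is not XtratuM's actual scheduler.

```python
# Sketch of the temporal side of Time and Space Partitioning: a fixed
# cyclic schedule gives each partition an exclusive window of CPU time
# within a repeating major frame. All values here are hypothetical.

MAJOR_FRAME_MS = 100
SCHEDULE = [           # (partition name, window length in ms)
    ("ada_app",  40),  # e.g. an ORK+/Ada partition
    ("payload",  35),
    ("spare",    25),
]

assert sum(ms for _, ms in SCHEDULE) == MAJOR_FRAME_MS

def partition_at(t_ms):
    """Return the partition that owns the CPU at absolute time t_ms."""
    offset = t_ms % MAJOR_FRAME_MS
    for name, length in SCHEDULE:
        if offset < length:
            return name
        offset -= length

print(partition_at(0))    # ada_app
print(partition_at(55))   # payload
print(partition_at(99))   # spare
```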

Relevance:

100.00%

Abstract:

Petri nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems, and they have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, for example by using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand, and they are therefore more useful for modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, while analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, the framework integrates two formal languages: a type of HLPN called Predicate Transition Nets (PrT Nets) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking and bounded model checking (BMC). Simulation is straightforward and fast but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state explosion problem. BMC is a trade-off: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities of this framework. The SAMTools suite developed for this framework integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
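As a reminder of the base formalism that HLPNs extend, here is a minimal sketch of low-level Petri net semantics (markings and transition firing); the net itself is a toy example, not one of the dissertation's models.

```python
# Minimal sketch of Petri net semantics: a transition is enabled when
# every input place holds enough tokens; firing moves tokens from input
# to output places. HLPNs generalize the tokens to structured data and
# guard transitions with algebraic formulas.

marking = {"ready": 1, "resource": 1, "done": 0}

TRANSITIONS = {
    # name: (input places with weights, output places with weights)
    "work": ({"ready": 1, "resource": 1}, {"done": 1, "resource": 1}),
}

def enabled(t, m):
    ins, _ = TRANSITIONS[t]
    return all(m[p] >= n for p, n in ins.items())

def fire(t, m):
    ins, outs = TRANSITIONS[t]
    m = dict(m)
    for p, n in ins.items():
        m[p] -= n
    for p, n in outs.items():
        m[p] += n
    return m

if enabled("work", marking):
    marking = fire("work", marking)
print(marking)  # {'ready': 0, 'resource': 1, 'done': 1}
```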

Relevance:

100.00%

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
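A toy sketch of the hinge-loss potentials underlying hinge-loss Markov random fields is shown below; the rule, weights and truth values are hypothetical and only illustrate why inference over such potentials stays convex.

```python
# Toy sketch of a hinge-loss potential, the building block of the
# hinge-loss Markov random fields mentioned above. A grounded rule such
# as  extracted(e, r) -> inGraph(e, r)  becomes a convex penalty on
# soft truth values in [0, 1]; the weight and atoms are hypothetical.

def hinge_loss(body, head, weight=1.0, squared=True):
    """Penalty for violating body -> head under Lukasiewicz semantics."""
    distance = max(0.0, body - head)   # distance to rule satisfaction
    return weight * (distance ** 2 if squared else distance)

extracted = 0.9   # extractor's confidence in a candidate fact
in_graph = 0.4    # candidate soft truth value in the knowledge graph

print(hinge_loss(extracted, in_graph))  # 0.25: this assignment is penalized
# Because each potential is convex in the soft truth values, MAP
# inference over the whole graph remains tractable at scale.
```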

Relevance:

100.00%

Abstract:

Executive Summary: The objective of this report was to use the Sydney Opera House as a case study of the application of Building Information Modelling (BIM). The Sydney Opera House is a complex, large building with a very irregular configuration, which makes it a challenging test. A number of key concerns are evident at SOH:

• the building structure is complex, and the building service systems - already the major cost of ongoing maintenance - are undergoing technology change, with new computer-based services becoming increasingly important;
• the current "documentation" of the facility comprises several independent systems, some overlapping, and is inadequate to support the services currently and prospectively required;
• the building has reached a milestone age in terms of the condition and maintainability of key public areas and service systems, the functionality of spaces, and longer-term strategic management;
• many business functions, such as space or event management, require up-to-date information about the facility that is currently inadequately delivered, and expensive and time consuming to update and deliver to customers;
• major building upgrades are being planned that will put considerable strain on existing Facilities Portfolio services and their capacity to manage them effectively.

While some of these concerns are unique to the House, many will be common to larger commercial and institutional portfolios. The work described here supported a complementary task which sought to identify whether a building information model - an integrated building database - could be created that would support asset and facility management functions (see Sydney Opera House - FM Exemplar Project, Report Number: 2005-001-C-4 Building Information Modelling for FM at Sydney Opera House), a business strategy that has been well demonstrated. The development of the BIMSS - Open Specification for BIM - has been surprisingly straightforward. The lack of technical difficulties in converting the House's existing conventions and standards to the new model-based environment can be related to three key factors:

• SOH Facilities Portfolio - the internal group responsible for asset and facility management - already has well-established building and documentation policies in place. The setting of, and adherence to, well-thought-out operational standards has been based on the need to create an environment that is understood by all users and that addresses the major business needs of the House.
• The second factor is the nature of the IFC Model Specification used to define the BIM protocol. The IFC standard is based on building practice and nomenclature, widely used in the construction industries across the globe. For example, the nomenclature of building parts - e.g. ifcWall - corresponds to normal terminology but extends the traditional drawing environment currently used for design and documentation. This demonstrates that the international IFC model accurately represents local practice for building data representation and management.
• A BIM environment sets up opportunities for innovative processes that can exploit the rich data in the model and improve services and functions for the House: for example, several high-level processes have been identified that could benefit from standardized Building Information Models, such as maintenance processes using engineering data, business processes using scheduling, venue access and security data, and benchmarking processes using building performance data.

The new technology matches business needs for current and new services. The adoption of IFC-compliant applications opens the way for shared building-model collaboration and new processes, a significant new focus of the BIM standards. In summary, SOH's current building standards have been successfully drafted for a BIM environment and are confidently expected to be fully developed when BIM is adopted operationally by SOH. These BIM standards and their application to the Opera House are intended as a template for other organisations to adopt for their own procurement and facility management activities. Appendices provide an overview of the IFC Integrated Object Model and an understanding of IFC Model Data.
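As an illustration of how the IFC nomenclature above is exposed to software, here is a sketch using the open-source ifcopenshell library; the file name is hypothetical, and this is not part of the SOH toolchain.

```python
import ifcopenshell  # open-source IFC toolkit, not part of the SOH report

# Sketch of how the IFC nomenclature discussed above (ifcWall etc.) is
# accessed programmatically; "soh_model.ifc" is a hypothetical file name.

model = ifcopenshell.open("soh_model.ifc")

# Every wall in the building model, via the standard IFC entity name.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)

# The same pattern reaches other asset-management data, e.g. the spaces
# used by space/event management.
print(len(model.by_type("IfcSpace")), "spaces in the model")
```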

Relevance:

100.00%

Abstract:

Substantial research efforts have been expended to deal with the complexity of concurrent systems that is inherent to their analysis, e.g., works that tackle the well-known state space explosion problem. Approaches differ in the classes of properties that they are able to suitably check and this is largely a result of the way they balance the trade-off between analysis time and space employed to describe a concurrent system. One interesting class of properties is concerned with behavioral characteristics. These properties are conveniently expressed in terms of computations, or runs, in concurrent systems. This article introduces the theory of untanglings that exploits a particular representation of a collection of runs in a concurrent system. It is shown that a representative untangling of a bounded concurrent system can be constructed that captures all and only the behavior of the system. Representative untanglings strike a unique balance between time and space, yet provide a single model for the convenient extraction of various behavioral properties. Performance measurements in terms of construction time and size of representative untanglings with respect to the original specifications of concurrent systems, conducted on a collection of models from practice, confirm the scalability of the approach. Finally, this article demonstrates practical benefits of using representative untanglings when checking various behavioral properties of concurrent systems.

Relevance:

100.00%

Abstract:

This dissertation seeks to define and classify potential forms of Nonlinear structure and to explore the possibilities they afford for the creation of new musical works. It provides the first comprehensive framework for the discussion of Nonlinear structure in musical works and a detailed overview of the rise of nonlinearity in music during the 20th century. Nonlinear events are shown to emerge through significant parametrical discontinuity at the boundaries between regions of relatively strong internal cohesion. The dissertation situates Nonlinear structures in relation to linear structures and unstructured sonic phenomena, and provides a means of evaluating Nonlinearity in a musical structure by considering the degree to which the structure is integrated, contingent, compressible and determinate as a whole. It is proposed that Nonlinearity can be classified as a three-dimensional space described by three continua: the temporal continuum, encompassing sequential and multilinear forms of organization; the narrative continuum, encompassing processual, game-structure and developmental narrative forms; and the referential continuum, encompassing stylistic allusion, adaptation and quotation. The use of spectrograms of recorded musical works is proposed as a means of evaluating Nonlinearity in a musical work through the visual representation of parametrical divergence in pitch, duration, timbre and dynamic over time. Spectral and structural analysis of repertoire works is undertaken as part of an exploration of musical nonlinearity and the compositional and performative features that characterize it. The contribution of cultural, ideological, scientific and technological shifts to the emergence of Nonlinearity in music is discussed, and a range of compositional factors that contributed to the emergence of musical Nonlinearity is examined. The evolution of notational innovations from the mobile score to the screen score is plotted, and a novel framework for the discussion of these forms of musical transmission is proposed. A computer-coordinated performative model is discussed, in which a computer synchronises the screening of notational information, provides temporal coordination of the performers through click-tracks or similar methods, and synchronises the audio processing and synthesized elements of the work. It is proposed that such a model constitutes a highly effective means of realizing complex Nonlinear structures. A creative folio comprising 29 original works that explore nonlinearity is presented, discussed and categorised using the proposed classifications. Spectrograms of these works are employed where appropriate to illustrate the instantiation of parametrically divergent substructures and examples of structural openness through multiple versioning.
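A minimal sketch of the spectrogram analysis described, using scipy and matplotlib; the recording name is hypothetical and stands in for one of the folio works.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

# Sketch of the spectrogram-based analysis described above: visualizing
# parametrical divergence (pitch, timbre, dynamics) over time.
# "work.wav" is a hypothetical recording of one of the folio works.

rate, audio = wavfile.read("work.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix stereo down to mono

f, t, Sxx = spectrogram(audio, fs=rate, nperseg=2048)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.ylabel("Frequency (Hz)")
plt.xlabel("Time (s)")
plt.title("Spectrogram: parametrical discontinuities mark structural boundaries")
plt.show()
```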

Relevance:

100.00%

Abstract:

Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between the outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP), conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not take our knowledge of the structure of the model into account, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by an application to a simple hydrological model.
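For contrast with the proposed state-space approach, here is a minimal sketch of a classical GASP-style emulator conditioned on a small design data set, using scikit-learn; the toy simulator stands in for a computationally demanding model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Minimal sketch of the classical GASP emulator the abstract contrasts
# with its state-space approach: condition a Gaussian process prior on a
# small design data set, then interpolate cheaply. The "simulator" here
# is a toy stand-in for a computationally demanding model.

def simulator(theta):
    return np.sin(3.0 * theta) + 0.5 * theta  # pretend this is expensive

design_inputs = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # design data set
design_outputs = simulator(design_inputs).ravel()

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
emulator.fit(design_inputs, design_outputs)

# Cheap interpolation at new parameter values, with uncertainty.
new_inputs = np.array([[0.33], [1.57]])
mean, std = emulator.predict(new_inputs, return_std=True)
for x, m, s in zip(new_inputs.ravel(), mean, std):
    print(f"theta={x:.2f}: emulated {m:.3f} +/- {s:.3f}")
```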