896 results for computer forensics, digital evidence, computer profiling, time-lining, temporal inconsistency, computer forensic object model
Abstract:
Decision-making in an uncertain environment is driven by two major needs: exploring the environment to gather information, or exploiting acquired knowledge to maximize reward. The neural processes underlying exploratory decision-making have mainly been studied by means of functional magnetic resonance imaging, overlooking any information about when decisions are made. Here, we carried out an electroencephalography (EEG) experiment in order to detect the time at which the brain generators responsible for these decisions have been sufficiently activated to lead to the next decision. Our analyses, based on a classification scheme, extract time-unlocked voltage topographies during reward presentation and use them to predict the type of decision made on the subsequent trial. Classification accuracy, measured as the area under the Receiver Operating Characteristic (ROC) curve, was on average 0.65 across 7 subjects. Classification accuracy rose above chance level after 516 ms on average across subjects. We speculate that decisions were already made before this critical period, as supported by a positive correlation with reaction times across subjects. On an individual-subject basis, distributed source estimations were performed on the extracted topographies to statistically evaluate the neural correlates of decision-making. For trials leading to exploration, there was significantly higher activity in the dorsolateral prefrontal cortex and the right supramarginal gyrus, areas responsible for modulating behavior under risk and for deduction. No area was more active during exploitation. We show for the first time the temporal evolution of differential patterns of brain activation in an exploratory decision-making task on a single-trial basis.
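The abstract measures single-trial classification accuracy as the area under the ROC curve. As an illustration only (no EEG data or the authors' classifier are reproduced here), a minimal sketch of computing AUC from per-trial decision scores via the Mann-Whitney rank formulation:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive trial (label 1,
    e.g. 'explore') scores higher than a randomly chosen negative one."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5          # ties count half
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance; the 0.65 reported in the abstract means the voltage topographies carry genuine predictive signal about the next decision.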
Abstract:
This thesis investigates the effectiveness of time-varying hedging during the financial crisis of 2007 and the European debt crisis of 2010. The seven test economies are all members of the European Monetary Union, but they are in different economic states. The time-varying hedge ratio was constructed from conditional variances and correlations estimated with multivariate GARCH models. We used three different underlying portfolios: national equity markets, government bond markets, and the combination of the two. These underlying portfolios were hedged using credit default swaps. The empirical part comprises in-sample and out-of-sample analyses, carried out with both constant and dynamic models. In almost every case the dynamic models outperform the constant ones in determining the hedge ratio. We could not find any statistically significant evidence to support the use of the asymmetric dynamic conditional correlation model. Our findings are in line with the prior literature and support the use of a time-varying hedge ratio. Finally, we found that in some cases credit default swaps are not suitable instruments for hedging, and they act more as speculative instruments.
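The hedge ratio in such studies is the minimum-variance ratio h_t = Cov_t(r_spot, r_hedge) / Var_t(r_hedge), with the conditional moments supplied by the GARCH model. A sketch of the idea, substituting a crude rolling-window estimate for the GARCH conditional moments (the function names and window choice are illustrative, not the thesis's specification):

```python
def hedge_ratio(spot_returns, hedge_returns):
    """Minimum-variance hedge ratio h = Cov(r_s, r_h) / Var(r_h)."""
    n = len(spot_returns)
    ms = sum(spot_returns) / n
    mh = sum(hedge_returns) / n
    cov = sum((s - ms) * (h - mh)
              for s, h in zip(spot_returns, hedge_returns)) / n
    var = sum((h - mh) ** 2 for h in hedge_returns) / n
    return cov / var

def rolling_hedge_ratios(spot, hedge, window):
    """Crude time-varying proxy: re-estimate h over a moving window.
    The thesis instead uses multivariate-GARCH conditional moments,
    which react to volatility clustering rather than a fixed window."""
    return [hedge_ratio(spot[i - window:i], hedge[i - window:i])
            for i in range(window, len(spot) + 1)]
```

If the hedging instrument's returns exactly track the portfolio's, the ratio is 1; a ratio drifting far from its in-sample value is precisely the situation where a constant hedge underperforms a dynamic one.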
Abstract:
Consumers constantly use content-based digital services to obtain more information about their health, and at the same time they evaluate the quality of the services they use. In order to design and offer the best possible digital services to consumers, companies should identify and analyze consumers' experiences with, and purposes for using, their services. The purpose of this study is to describe consumers' perceptions of Masennusinfo.fi, a content-based digital service that provides its users with information about depression. The goal is to find out how consumers perceive the quality of this service, provided by a pharmaceutical company, and for what purposes it is used. The research problem can be divided into three sub-questions: For what purposes do consumers use content-based digital services? How do consumers perceive the quality of these services? How do purpose of use and perceived quality differ across user groups? The study is carried out as a web-based survey. The measures are constructed on the basis of a theoretical framework drawn from prior research. The empirical part of the study is conducted as a pop-up survey placed on the site under study, giving all users of the service the opportunity to respond. The results show that the service is used mostly by women, by relatively young people aged 16-29, or by people over middle age (50-65) who are either employed or students and highly educated. Masennusinfo.fi is regarded as a high-quality service by all user groups, in terms of both usability and content. The purposes of use are also fairly similar across user groups; typically the service is used to search for information in the early stages of the illness. Based on the findings, it is proposed that the service be modified to correspond even better to its purposes of use and to its typical user profile. Since a few small differences between user groups' perceptions were observed, it is up to the service provider to decide which group's preferences it follows.
Abstract:
Observing the execution of JavaScript applications is usually done by instrumenting a production virtual machine (VM) or by performing an ad hoc, complex source-to-source translation. This thesis presents an alternative based on virtual machine layering. Our approach performs a source-to-source translation of a program at run time to expose its low-level operations through a flexible object model. These low-level operations can then be redefined at run time in order to observe them. To limit the performance penalty introduced, our approach exploits the fast original operations of the underlying VM where possible, and applies just-in-time compilation techniques in the layered VM. Our implementation, Photon, is on average 19% faster than a modern interpreter, and between 19× and 56× slower on average than the just-in-time compilers used in popular web browsers. This thesis therefore shows that virtual machine layering is a competitive alternative to modifying a modern JavaScript interpreter, when applied to the run-time observation of object operations and function calls.
Abstract:
Sweden’s recent report on Urban Sustainable Development calls out a missing link between the urban design process and citizens. This paper investigates whether engaging citizens as design agents, by providing a platform for alternate participation, can bridge this gap through the transfer of spatial agency and new modes of critical cartography. To assess whether this is the case, the approaches are applied to Stockholm’s urban agriculture movement in a staged intervention. The aim of the intervention was to engage citizens in locating existing and potential places for growing food, and in gathering information from these sites to inform design in urban agriculture. The design-based methodologies incorporated digital and bodily interfaces for this cartography to take place. The Urban CoMapper, a smartphone app, captured real-time perspectives through crowd-sourced mapping. In the bodily cartography, participants used their bodies to trace the site and reveal their sensorial perceptions. The data gathered from these approaches gave way to a mode of artistic research for exploring urban agriculture, along with inviting artists to engage in the dialogues. In sum, the results showed that a combination of digital and bodily approaches is necessary for a critical cartography if we want to engage citizens holistically in the urban design process as spatial agents informing urban policy. Such methodologies formed a reflective interrogation and encouraged a new intimacy with nature; in this instance, one that can transform our urban conduct by questioning our eating habits: where we get our food from and how we eat it seasonally.
Abstract:
We propose a probabilistic object classifier for outdoor scene analysis as a first step in solving the problem of scene context generation. The method begins with top-down control, which uses previously learned models (appearance and absolute location) to obtain an initial pixel-level classification. This information provides the cores of objects, which are used to acquire more accurate object models; growing these cores by means of specific active regions then allows accurate recognition of known regions. Next, a general segmentation stage segments the unknown regions with a bottom-up strategy. Finally, the last stage performs a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and a recognition of each segment as a given object class or as an unknown segmented object. Experimental results are shown and evaluated to prove the validity of our proposal.
Abstract:
Many evolutionary algorithm applications involve fitness functions with high time complexity, large dimensionality (hence very many fitness evaluations will typically be needed), or both. In such circumstances there is a dire need to tune the various features of the algorithm well, so that performance and time savings are optimized. However, these are precisely the circumstances in which prior tuning is very costly in time and resources; hence there is a need for methods that enable fast prior tuning in such cases. We describe a candidate technique for this purpose, in which we model a landscape as a finite state machine inferred from preliminary sampling runs. In prior algorithm-tuning trials we can then replace the 'real' landscape with the model, enabling extremely fast tuning and saving far more time than was required to infer the model. Preliminary results indicate much promise, though much work needs to be done to establish the conditions under which the technique can be most beneficially used. A main limitation of the method as described here is its restriction to mutation-only algorithms, but there are various ways to address this and other limitations.
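The idea can be caricatured in a few lines: treat observed fitness values as states of a finite state machine, estimate transition probabilities under mutation from preliminary sampling runs, then run cheap surrogate trials against the model instead of the real fitness function. The OneMax fitness, the per-bit mutation model, and all parameter values below are stand-ins for illustration, not the paper's benchmarks:

```python
import random
from collections import defaultdict

def onemax(bits):                       # stand-in "real" fitness (assumption)
    return sum(bits)

def sample_fsm(n_bits, mut_rate, n_samples=2000, rng=None):
    """Infer a finite-state model of the landscape: states are observed
    fitness values, transitions are mutation outcomes seen in sampling
    runs. Random sampling under-covers high-fitness states, so the model
    is crudest exactly where search ends up -- one of the open issues."""
    rng = rng or random.Random(0)
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_samples):
        g = [rng.randint(0, 1) for _ in range(n_bits)]
        g2 = [b ^ (rng.random() < mut_rate) for b in g]   # per-bit flip
        counts[onemax(g)][onemax(g2)] += 1
    return {s: {t: c / sum(d.values()) for t, c in d.items()}
            for s, d in counts.items()}

def simulate_run(fsm, start, steps, rng):
    """Cheap surrogate (1+1)-EA run: draw an offspring fitness from the
    model's transition distribution and keep it if it is no worse.
    No real fitness evaluations are performed."""
    f = start
    for _ in range(steps):
        dist = fsm.get(f)
        if not dist:                    # state never sampled: model is blind
            break
        r, acc = rng.random(), 0.0
        for t, p in dist.items():
            acc += p
            if r <= acc:
                f = max(f, t)           # elitist acceptance
                break
    return f
```

Tuning trials (e.g. comparing mutation rates) would call `simulate_run` thousands of times at negligible cost, reserving real evaluations for the initial `sample_fsm` pass.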
Abstract:
The combination of model predictive control based on linear models (MPC) with feedback linearization (FL) has attracted interest for a number of years, giving rise to MPC+FL control schemes. An important advantage of such schemes is that feedback-linearizable plants can be controlled with a linear predictive controller with a fixed model. Handling input constraints within such schemes is difficult, since simple bound constraints on the input become state-dependent because of the nonlinear transformation introduced by feedback linearization. This paper introduces a technique for handling input constraints within a real-time MPC+FL scheme, where the plant model employed is a class of dynamic neural networks. The technique is based on a simple affine transformation of the feasible region. A simulated case study is presented to illustrate the use and benefits of the technique.
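The state-dependence at the heart of the difficulty can be seen directly: under a feedback-linearizing law v = α(x) + β(x)·u, fixed bounds on the physical input u map to bounds on the linearizing input v that move with the state x. A toy illustration of that mapping only (α and β are generic FL terms, not the paper's neural-network model, and this is not the paper's affine-transformation technique itself):

```python
def transformed_input_bounds(alpha_x, beta_x, u_min, u_max):
    """Under v = alpha(x) + beta(x) * u with constant u_min <= u <= u_max,
    the feasible interval for v is state-dependent; a sign change in
    beta(x) flips the interval, so we reorder the endpoints."""
    lo = alpha_x + beta_x * u_min
    hi = alpha_x + beta_x * u_max
    return (lo, hi) if lo <= hi else (hi, lo)
```

A linear MPC expects fixed input bounds, so these moving intervals must be re-approximated at every sample, which is the constraint-handling problem the paper addresses.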
Abstract:
The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or a means of potentially identifying those who might be involved in the ‘peaks’ of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media Web site.
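One simple quantitative route to the 'peaks' of activity the framework looks for is to bucket post timestamps into intervals and flag intervals whose volume is anomalously high, then pull out the actors active in those intervals. A minimal sketch (the mean-plus-k-sigma threshold and the data layout are assumptions for illustration, not the paper's method):

```python
from collections import Counter

def activity_peaks(posts, bucket_seconds=3600, threshold=2.0):
    """posts: iterable of (unix_timestamp, actor_id) pairs.
    Buckets posts into fixed intervals and flags intervals whose volume
    exceeds mean + threshold * std as 'peaks'. Returns the peak bucket
    indices and the set of actors who posted during them."""
    buckets = Counter(int(t) // bucket_seconds for t, _ in posts)
    counts = list(buckets.values())
    mean = sum(counts) / len(counts)
    std = (sum((c - mean) ** 2 for c in counts) / len(counts)) ** 0.5
    cutoff = mean + threshold * std
    peaks = {b for b, c in buckets.items() if c > cutoff}
    actors = {a for t, a in posts if int(t) // bucket_seconds in peaks}
    return peaks, actors
```

This reduces a large data set to a much smaller set of candidate individuals, in the spirit of the framework's relational and temporal filtering; a real investigation would layer engagement analysis on top rather than stop at raw volume.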
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
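The intersection of a flood extent with a DTM can be sketched concretely: flooded pixels that border dry pixels lie on the flood boundary, so the terrain height there approximates the local water level. A toy grid illustration of that WLO extraction step (real workflows operate on georeferenced SAR-derived rasters with uncertainty handling, none of which is attempted here):

```python
def water_level_observations(dem, flood):
    """dem: 2-D list of terrain heights; flood: same-shape boolean mask.
    Flooded cells with at least one dry 4-neighbour are boundary cells;
    their DEM height is taken as a water-level observation (WLO).
    Returns a list of (row, col, height) tuples."""
    rows, cols = len(dem), len(dem[0])
    wlos = []
    for r in range(rows):
        for c in range(cols):
            if not flood[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not flood[rr][cc]:
                    wlos.append((r, c, dem[r][c]))
                    break               # one observation per boundary cell
    return wlos
```

The accuracy of these WLOs is bounded by the DTM's vertical error, which is why the variable-resolution global DTM discussed in the abstract matters: boundary heights from a LiDAR DTM are far more trustworthy than those from a coarse DEM.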
Abstract:
We study, by numerical simulations, the time correlation function of a stochastic lattice model describing the dynamics of coexistence of two interacting biological species whose numbers of individuals exhibit time cycles. Its asymptotic behavior is shown to decay in time as an exponentially damped sinusoid, from which we extract the dominant eigenvalue of the evolution operator related to the stochastic dynamics, showing that it is complex, with the imaginary part being the frequency of the population cycles. The transition from oscillatory to nonoscillatory behavior occurs when the asymptotic behavior of the time correlation function becomes a pure exponential, that is, when the real part of the complex eigenvalue equals a real eigenvalue. We also show that the amplitude of the undamped oscillations increases with the square root of the area of the habitat, as for ordinary random fluctuations. (C) 2009 Elsevier B.V. All rights reserved.
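The asymptotic form described, C(t) ≈ A·exp(Re(λ)t)·cos(Im(λ)t), means the dominant eigenvalue can be read off a sampled correlation function: the spacing of successive maxima gives the frequency (imaginary part) and the decay of their heights gives the damping rate (minus the real part). A synthetic-data sketch of that extraction (parameter values are illustrative, not the paper's):

```python
import math

def correlation(t, gamma=0.05, omega=0.5, a=1.0):
    """Model asymptotic form C(t) = A exp(-gamma t) cos(omega t);
    the dominant eigenvalue is lambda = -gamma + i*omega."""
    return a * math.exp(-gamma * t) * math.cos(omega * t)

def extract_eigenvalue(samples, dt):
    """Recover (gamma, omega) from a sampled C(t): omega from the spacing
    of two successive local maxima, gamma from the decay of their heights
    (the cosine factor is identical at successive maxima, so it cancels)."""
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i - 1] < samples[i] > samples[i + 1]]
    i, j = peaks[0], peaks[1]
    period = (j - i) * dt
    omega = 2 * math.pi / period
    gamma = math.log(samples[i] / samples[j]) / period
    return gamma, omega
```

The oscillatory-to-nonoscillatory transition in the abstract corresponds to the maxima disappearing altogether: C(t) becomes a pure exponential and only a real decay rate remains.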
Abstract:
The demands on image-processing systems include robustness, high recognition rates, the capability to handle incomplete digital information, and great flexibility in capturing the shape of an object in an image. It is exactly here that convex hulls come into play. The objective of this paper is twofold. First, we summarize the state of the art in computational convex hull development, for researchers interested in using convex hulls in image processing to build their intuition or generate nontrivial models. Second, we present several applications involving convex hulls in image-processing tasks. By this we have striven to show researchers the rich and varied set of applications they can contribute to. This paper also makes a humble effort to enthuse prospective researchers in this area. We hope that the resulting awareness will result in new advances for specific image recognition applications.
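For readers building intuition, the standard planar convex hull computation is short enough to sketch in full; this is Andrew's monotone chain (O(n log n)), one well-known algorithm among those such a survey would cover, applied in practice to contour points extracted from an image:

```python
def convex_hull(points):
    """Andrew's monotone chain. points: list of (x, y) tuples.
    Returns hull vertices in counter-clockwise order, starting from the
    lexicographically smallest point; collinear points are dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):                 # z-component of (a-o) x (b-o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates
```

The hull of an object's pixels gives a compact, rotation-robust shape descriptor; measures such as solidity (object area divided by hull area) build directly on it.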
Abstract:
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 MB of RAM and 100 MB of hard disk space
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: the investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Those radionuclides which are able to enter living cells by metabolic or other processes give rise to localized doses, which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and for radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure.
The general multiple-compartment model (GMCM) is a powerful and widely accepted method for biokinetic studies, which allows the calculation of the concentrations of trace elements in organs as functions of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: this version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace element is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there is one central compartment (e.g., blood) that connects the flow with all other compartments, and flow between the other compartments is not included.
Typical running time: depends on the choice of calculations. Using the Derivative Method, the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when around 15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
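The central-flux model described in the restrictions (one central compartment, e.g. blood, exchanging material with every other compartment) reduces to a linear ODE system dq/dt = Kq. A forward-Euler sketch of that model class, for orientation only (the rate constants and dimensions are invented, and STATFLUX's actual task, fitting these rates from measured concentrations, is not attempted here):

```python
def simulate_central_flux(k_out, k_in, k_elim, q0, dt=0.01, t_end=10.0):
    """Central-flux compartment model: compartment 0 is central (blood),
    exchanging with each peripheral compartment i at rates k_out[i]
    (central -> i) and k_in[i] (i -> central); k_elim removes material
    from the central compartment. Forward-Euler integration of dq/dt = Kq;
    returns the compartment contents at t_end."""
    q = list(q0)
    n = len(q) - 1                      # number of peripheral compartments
    for _ in range(int(t_end / dt)):
        dq0 = -k_elim * q[0]
        for i in range(n):
            flow_out = k_out[i] * q[0]       # central -> peripheral i
            flow_in = k_in[i] * q[i + 1]     # peripheral i -> central
            dq0 += flow_in - flow_out
            q[i + 1] += dt * (flow_out - flow_in)
        q[0] += dt * dq0
    return q
```

With elimination switched off the scheme conserves total mass exactly, which is a useful sanity check on any compartment-model implementation; the inverse problem the code solves runs this forward model inside a least-squares loop over the rate constants.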
Abstract:
This paper addresses the H∞ state-feedback control design problem for discrete-time Markov jump linear systems. First, under the assumption that the Markov parameter is measured, the main contribution is an LMI characterization of all linear feedback controllers such that the closed-loop output remains bounded by a given norm level. This result allows the robust controller design to deal with convex bounded parameter uncertainty, probability uncertainty, and cluster availability of the Markov mode. For partly unknown transition probabilities, the proposed design problem is proved to be less conservative than one available in the current literature. An example is solved for illustration and comparison. © 2011 IFAC.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)