912 results for Unified Forensic Analysis


Relevance:

30.00%

Abstract:

Child sexual abuse (CSA) is a complex subject to investigate, and allegations often rest exclusively on the child's testimony. Yet even when a child discloses CSA, he or she may be reluctant to reveal certain personal and embarrassing details of the abuse to a stranger. Because consent to film interviews cannot always be obtained, and because the nonverbal attitudes of the child and of the interviewer are relatively difficult to measure during investigative interviews, this research was novel in creating verbal scales of such attitudes. To determine the correlation between interviewer attitude and child collaboration, 90 interviews with children aged 4 to 13 were analysed. The interviews were audio-recorded, transcribed, and coded using verbal subscales of supportive and non-supportive interviewer attitudes, and of resistant and cooperative attitudes on the part of the child. The proportion of CSA details provided by the children was also computed. To compare interviews conducted with and without the National Institute of Child Health and Human Development (NICHD) protocol, a MANCOVA controlling for child age and the proportion of open-ended questions showed, as expected, that protocol interviews elicited more details in response to open-ended questions than non-protocol interviews. However, no difference emerged in the attitudes of the child or of the interviewer. To identify the best predictor of the quantity of details disclosed by the children, a hierarchical multiple regression analysis was conducted. After controlling for child age, protocol use, and the proportion of open-ended questions, child resistance and non-supportive interviewer attitude explained an additional 28% of the variance, for a total of 58% of variance explained by the model. Furthermore, to determine whether child collaboration and interviewer attitude vary with the children's age, a MANOVA showed that interviewers behave similarly regardless of the children's age, even though young children are generally more reluctant and cooperate significantly less well than preadolescents. Finally, a hierarchical multiple regression showed that interviewer support is the best predictor of child collaboration, over and above the characteristics of the child and of the abuse. Although the NICHD protocol has brought considerable progress in how children are interviewed, increasing the proportion of details obtained through open-ended questions/free recall and strengthening the credibility of the testimony, adherence to the protocol is not in itself sufficient to persuade young children to speak in detail about CSA to a stranger. The results of this thesis have scientific value and help enrich theoretical knowledge about the attitudes of the child and of the interviewer expressed during interviews. Even though the interviewers in this study offered more support to resistant children regardless of their age, better ways of countering the resistant attitudes expressed by young children, and a minimisation of non-supportive attitudes during interviews, are needed to promote detailed disclosure of CSA.
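
The incremental-variance analysis described above can be reproduced with standard tools. A minimal sketch in Python with statsmodels, using hypothetical column names (age, protocol, open_q, resistance, nonsupport, details); the thesis data themselves are not public:

```python
# Hierarchical multiple regression: compare R^2 of a base model with a
# model that adds child resistance and non-supportive interviewer attitude.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interviews.csv")  # one row per interview (hypothetical)

base = smf.ols("details ~ age + protocol + open_q", data=df).fit()
full = smf.ols("details ~ age + protocol + open_q + resistance + nonsupport",
               data=df).fit()

print(f"Step 1 R^2: {base.rsquared:.2f}")                  # controls only
print(f"Step 2 R^2: {full.rsquared:.2f}")                  # ~0.58 reported
print(f"Delta R^2:  {full.rsquared - base.rsquared:.2f}")  # ~0.28 reported
```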

Relevance:

30.00%

Abstract:

Software systems are increasingly complex, and their development is often carried out by dispersed and changing teams. Moreover, most software today is reused rather than developed from scratch. The comprehension task, inherent in maintenance tasks, consists of analysing several dimensions of the software in parallel. The time dimension enters the software at two levels: the software changes during its evolution and during its execution. These changes take on a particular meaning when they are analysed together with other dimensions of the software. Analysing multidimensional data is a difficult problem, but certain methods make it possible to work around this difficulty. Semi-automatic approaches, such as software visualisation, let the user intervene during the analysis to explore and guide the search for information. In the first stage of the thesis, we apply visualisation techniques to better understand the dynamics of software during evolution and during execution. Changes over time are represented by heat maps, so the same graphical representation is used to visualise changes during evolution and changes during execution. Another category of approaches for understanding certain dynamic aspects of software relies on heuristics. In the second stage of the thesis, we address the identification of phases, during evolution or during execution, using a single approach. In this context, the premise is that there is an inherent cohesion among events that makes it possible to isolate subsets of them as phases. This cohesion hypothesis is then defined specifically for code-change events (evolution) and state-change events (execution). The objective of the thesis is to study the unification of these two dimensions of time, evolution and execution. This reflects our aim of bringing together two research fields that address the same category of problems, but from two different perspectives.
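
A heat map of change activity of the kind described is straightforward to prototype. A minimal sketch, assuming a list of (entity, period) change events; the thesis's actual tooling and data model are not given in the abstract:

```python
# Render change events (evolution commits or execution state changes)
# as an entity-by-period heat map, as in the visualisation step described.
# The entities and event list here are illustrative, not the thesis's data.
import numpy as np
import matplotlib.pyplot as plt

entities = ["Parser", "Lexer", "Cache", "UI"]
periods = 8  # e.g. weeks of evolution, or time slices of one execution
events = [("Parser", 0), ("Parser", 1), ("Cache", 1), ("UI", 5), ("UI", 6)]

counts = np.zeros((len(entities), periods))
for name, t in events:
    counts[entities.index(name), t] += 1

plt.imshow(counts, aspect="auto", cmap="hot")
plt.yticks(range(len(entities)), entities)
plt.xlabel("time period")
plt.colorbar(label="number of changes")
plt.show()
```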

Relevance:

30.00%

Abstract:

Reflection is considered a significant element of medical pedagogy and practice, yet there is no consensus on its definition or its modelling. Because reflection concurrently carries several meanings, it is difficult to operationalise. A standard definition and model are needed to improve the development of practical applications of reflection. In this thesis, we identify, explore, and thematically analyse the most influential conceptualisations of reflection, and we develop a new model and definition. Reflection is defined as the process of engaging the self (S) in attentive, critical, exploratory, and iterative (ACEI) interactions with one's thoughts and actions (TA), and their underlying conceptual frames (CF), with a view to changing them and a view on the change itself (VC). Our conceptual model comprises the five internal components of reflection and the extrinsic elements that influence it.

Relevance:

30.00%

Abstract:

Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples, in particular the regression problem of approximating a multivariate function from sparse data. We present both formulations in a unified framework, namely in the context of Vapnik's theory of statistical learning, which provides a general foundation for the learning problem by combining functional analysis and statistics.
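
Both methods can be written as the same Tikhonov-style minimisation over a reproducing kernel Hilbert space, with only the loss function V changing. A sketch of this standard formulation (the usual notation for this unification, not copied from the abstract):

```latex
% Unified regularization functional over an RKHS \mathcal{H}_K:
% Regularization Networks use the square loss, SVM regression the
% epsilon-insensitive loss.
\min_{f \in \mathcal{H}_K} \; \frac{1}{n}\sum_{i=1}^{n} V\bigl(y_i, f(x_i)\bigr)
  + \lambda \lVert f \rVert_{K}^{2},
\qquad
V\bigl(y, f(x)\bigr) =
\begin{cases}
\bigl(y - f(x)\bigr)^2 & \text{(Regularization Networks)}\\[2pt]
\lvert y - f(x) \rvert_{\varepsilon} & \text{(SVM regression)}
\end{cases}
```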

Relevance:

30.00%

Abstract:

By using suitable parameters, we present a unified approach for describing four methods for representing categorical data in a contingency table. These methods include: correspondence analysis (CA), the alternative approach using the Hellinger distance (HD), the log-ratio (LR) alternative, which is appropriate for compositional data, and the so-called non-symmetrical correspondence analysis (NSCA). We then compare these four methods and give some illustrative examples. Some approaches based on cumulative frequencies are also linked and studied using matrices.

Keywords: correspondence analysis, Hellinger distance, non-symmetrical correspondence analysis, log-ratio analysis, Taguchi inertia
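
The common structure behind these methods is a singular value decomposition of a suitably transformed contingency table. A minimal sketch contrasting the CA and HD transformations, using standard textbook formulas (one common centring choice for HD); the paper's parametrisation is not reproduced here:

```python
# Correspondence analysis (CA) vs the Hellinger-distance (HD) variant:
# both reduce to an SVD of a transformed contingency table.
import numpy as np

N = np.array([[25, 10, 5],
              [8, 20, 12],
              [4, 6, 30]], dtype=float)   # illustrative counts
P = N / N.sum()                            # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)        # row / column masses

# CA: SVD of the matrix of standardised residuals
S_ca = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)

# HD: square roots of row profiles, here centred at sqrt(column masses)
profiles = P / r[:, None]
S_hd = np.diag(r**0.5) @ (np.sqrt(profiles) - np.sqrt(c)[None, :])

for name, S in [("CA", S_ca), ("HD", S_hd)]:
    U, d, Vt = np.linalg.svd(S, full_matrices=False)
    print(name, "principal inertias:", np.round(d**2, 4))
```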

Relevance:

30.00%

Abstract:

This thesis theoretically studies the relationship between the informal sector (in both the labor and the housing markets) and city structure.

Relevance:

30.00%

Abstract:

The characteristics of convectively generated gravity waves during an episode of deep convection near the coast of Wales are examined both in high-resolution mesoscale simulations [with the (UK) Met Office Unified Model] and in observations from a Mesosphere-Stratosphere-Troposphere (MST) wind-profiling Doppler radar. Deep convection reached the tropopause and generated vertically propagating, high-frequency waves in the lower stratosphere that produced vertical velocity perturbations of O(1 m/s). Wavelet analysis is applied to determine the characteristic periods and wavelengths of the waves. In both the simulations and the observations, the wavelet spectra contain several distinct preferred scales, indicated by multiple spectral peaks. The peaks are most pronounced in the horizontal spectra, at several wavelengths below 50 km. Although these peaks are clearest and of largest amplitude in the highest-resolution simulations (with 1 km horizontal grid length), they are also evident in coarser simulations (with 4 km horizontal grid length). Peaks also exist in the vertical and temporal spectra (between approximately 2.5 and 4.5 km, and 10 to 30 minutes, respectively), with good agreement between simulation and observation. Two-dimensional (wavenumber-frequency) spectra demonstrate that each of the selected horizontal scales contains peaks at each of the preferred temporal scales revealed by the one-dimensional spectra alone.
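
Preferred wave scales of this kind are typically picked out as maxima in a wavelet power spectrum. A minimal sketch using the continuous wavelet transform from PyWavelets on a synthetic vertical-velocity series; the radar and model data are not reproduced here:

```python
# Continuous wavelet transform of a synthetic vertical-velocity time series;
# peaks in the time-averaged wavelet power indicate preferred wave periods.
import numpy as np
import pywt

dt = 60.0                          # sample interval: 60 s
t = np.arange(0, 6 * 3600, dt)     # six hours of data
# synthetic signal: waves with ~12 min and ~25 min periods, plus noise
w = (np.sin(2 * np.pi * t / (12 * 60)) +
     0.7 * np.sin(2 * np.pi * t / (25 * 60)) +
     0.3 * np.random.randn(t.size))

scales = np.arange(2, 64)
coefs, freqs = pywt.cwt(w, scales, "morl", sampling_period=dt)
power = (np.abs(coefs) ** 2).mean(axis=1)   # time-averaged power per scale

periods_min = 1.0 / freqs / 60.0
best = periods_min[np.argsort(power)[-2:]]  # two highest-power scales
print("dominant periods (min):", np.round(np.sort(best), 1))
```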

Relevance:

30.00%

Abstract:

Mites can be found in all imaginable terrestrial habitats, in freshwater, and in salt water. Mites can be found in our houses and furnishings, on our clothes, and even in the pores of our skin; almost every single person carries mites. Most of the time we are unaware of them, because they are small and easily overlooked, and, most of the time, they do not cause trouble. In fact, they may even prove useful, for instance in forensics. The first arthropod scavengers colonising a dead body will be flies carrying phoretic mites. The flies will complete their life cycle in and around the corpse, while the mites may feed on the immature stages of the flies. The mites will reproduce much faster than their carriers, offering themselves as valuable timeline markers. There are environments where insects are absent or rare, or where the environmental conditions impede their access to the corpse. Here, mites that are already present, and mites that arrive on foot, through air currents, or by material transfer, become important. At the end of the nineteenth century, the work of Jean Pierre Mégnin became the starting point of forensic acarology. Mégnin documented his observations in 'La Faune des Cadavres' [The Fauna of Carcasses]. He was the first to list eight distinct waves of arthropods colonising human carcasses: the first wave included flies and mites, and the sixth wave was composed exclusively of mites. The scope of forensic acarology goes further than mites as indicators of time of death. Mites are micro-habitat specific and might provide evidential data on the movement or relocation of bodies, or on placing a suspect at the scene of a crime. Because of their high diversity, wide occurrence, and abundance, mites may be of great value in the analysis of trace evidence.

Relevance:

30.00%

Abstract:

Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or with the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder to understand. We believe that one approach to profiling and analysing distributed systems and their applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues occurring in the supporting software or in the application itself.
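
The unify-then-query pattern described can be sketched with standard Semantic Web libraries. A minimal illustration with rdflib, using a hypothetical log vocabulary; Slogger's actual ontology and data store are not described in the abstract:

```python
# Convert heterogeneous log records into RDF triples in one graph, then
# query across layers with SPARQL. Vocabulary and records are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

LOG = Namespace("http://example.org/log#")
g = Graph()

records = [
    ("middleware", "2023-01-01T12:00:01", "ERROR", "queue overflow"),
    ("application", "2023-01-01T12:00:02", "WARN", "retrying request"),
    ("os", "2023-01-01T12:00:02", "ERROR", "oom-killer invoked"),
]
for i, (layer, ts, level, msg) in enumerate(records):
    e = LOG[f"event{i}"]
    g.add((e, RDF.type, LOG.Event))
    g.add((e, LOG.layer, Literal(layer)))
    g.add((e, LOG.time, Literal(ts)))
    g.add((e, LOG.level, Literal(level)))
    g.add((e, LOG.message, Literal(msg)))

# One query now spans application, middleware, and OS logs alike.
q = """SELECT ?layer ?time ?msg WHERE {
         ?e a log:Event ; log:level "ERROR" ;
            log:layer ?layer ; log:time ?time ; log:message ?msg . }"""
for row in g.query(q, initNs={"log": LOG}):
    print(row.layer, row.time, row.msg)
```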

Relevance:

30.00%

Abstract:

We propose a unified data modeling approach that is equally applicable to supervised regression and classification applications, as well as to unsupervised probability density function estimation. A particle swarm optimization (PSO) aided orthogonal forward regression (OFR) algorithm based on leave-one-out (LOO) criteria is developed to construct parsimonious radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines the center vector and diagonal covariance matrix of one RBF node by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean square error, while the LOO misclassification rate is adopted in two-class classification applications. By adopting the Parzen window estimate as the desired response, the unsupervised density estimation problem is transformed into a constrained regression problem. This PSO-aided OFR algorithm for tunable-node RBF networks is capable of constructing very parsimonious RBF models that generalize well, and our analysis and experimental results demonstrate that the algorithm is computationally even simpler than the efficient regularization-assisted orthogonal least squares algorithm based on LOO criteria for selecting fixed-node RBF models. Another significant advantage of the proposed learning procedure is that it has no learning hyperparameters that must be tuned using costly cross validation. The effectiveness of the proposed PSO-aided OFR construction procedure is illustrated using several examples taken from regression and classification, as well as density estimation applications.
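
The LOO criterion at the heart of the construction does not require refitting n models: for a linear-in-the-parameters RBF model, the leave-one-out residuals follow from the hat matrix in closed form. A small sketch of that standard identity; the PSO search over each node's centre and covariance, which is the paper's contribution, is omitted:

```python
# Closed-form leave-one-out MSE for a fixed RBF design matrix Phi:
# e_loo_i = e_i / (1 - h_ii), with H = Phi (Phi^T Phi)^-1 Phi^T.
# Standard result; the paper wraps such a criterion in a PSO search.
import numpy as np

def loo_mse(Phi, y):
    G = Phi.T @ Phi
    beta = np.linalg.solve(G, Phi.T @ y)        # least-squares weights
    resid = y - Phi @ beta                      # ordinary residuals
    # diagonal of the hat matrix H = Phi G^-1 Phi^T
    h = np.einsum("ij,jk,ik->i", Phi, np.linalg.inv(G), Phi)
    return np.mean((resid / (1.0 - h)) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(100)
centres = np.linspace(-3, 3, 8)                 # fixed Gaussian RBF nodes
Phi = np.exp(-(X - centres[None, :]) ** 2 / (2 * 0.5 ** 2))
print("LOO MSE:", loo_mse(Phi, y))
```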

Relevance:

30.00%

Abstract:

This study details validation of two separate multiplex STR systems for use in paternity investigations. These are the Second Generation Multiplex (SGM) developed by the UK Forensic Science Service and the PowerPlex 1 multiplex commercially available from Promega Inc. (Madison, WI, USA). These multiplexes contain 12 different STR systems (two are duplicated in the two systems). Population databases from Caucasian, Asian and Afro-Caribbean populations have been compiled for all loci. In all but two of the 36 STR/ethnic group combinations, no evidence was obtained to indicate inconsistency with Hardy-Weinberg (HW) proportions. Empirical and theoretical approaches have been taken to validate these systems for paternity testing. Samples from 121 cases of disputed paternity were analysed using established Single Locus Probe (SLP) tests currently in use, and also using the two multiplex STR systems. Results of all three test systems were compared and no non-conformities in the conclusions were observed, although four examples of apparent germ line mutations in the STR systems were identified. The data was analysed to give information on expected paternity indices and exclusion rates for these STR systems. The 12 systems combined comprise a highly discriminating test suitable for paternity testing. 99.96% of non-fathers are excluded from paternity on two or more STR systems. Where no exclusion is found, Paternity Index (PI) values of > 10,000 are expected in > 96% of cases.
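
Paternity evaluation of this kind multiplies per-locus likelihood ratios into a combined paternity index. A toy sketch of that arithmetic; the per-locus PI values below are invented for illustration and are not from the study:

```python
# Combined paternity index (CPI) is the product of per-locus PIs; the
# posterior probability of paternity with a 0.5 prior is W = CPI / (CPI + 1).
# The twelve values below are illustrative only.
import math

locus_pi = [3.2, 1.8, 5.4, 2.1, 4.7, 1.6, 2.9, 3.8, 2.4, 1.9, 6.1, 2.7]
cpi = math.prod(locus_pi)
w = cpi / (cpi + 1.0)
print(f"CPI = {cpi:,.0f}")         # values > 10,000 were typical in the study
print(f"P(paternity) = {w:.6f}")   # assuming a prior of 0.5
```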

Relevance:

30.00%

Abstract:

We provide a unified framework for a range of linear transforms that can be used for the analysis of terahertz spectroscopic data, with particular emphasis on their application to the measurement of leaf water content. The use of linear transforms for filtering, regression, and classification is discussed. For illustration, a classification problem involving leaves at three stages of drought and a prediction problem involving simulated spectra are presented. Issues resulting from scaling the data set are discussed. Using Lagrange multipliers, we arrive at the transform that yields the maximum separation between the spectra and show that this optimal transform is equivalent to computing the Euclidean distance between the samples. The optimal linear transform is compared with the average for all the spectra as well as with the Karhunen–Loève transform to discriminate a wet leaf from a dry leaf. We show that taking several principal components into account is equivalent to defining new axes in which data are to be analyzed. The procedure shows that the coefficients of the Karhunen–Loève transform are well suited to the process of classification of spectra. This is in line with expectations, as these coefficients are built from the statistical properties of the data set analyzed.
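
The Karhunen–Loève transform used here is what most toolboxes call principal component analysis. A minimal sketch of projecting spectra onto KL axes for a wet/dry discrimination, using synthetic spectra; the terahertz data of the study are not reproduced:

```python
# Karhunen-Loeve transform (PCA) of a set of spectra: the new axes are
# eigenvectors of the data covariance, and the coefficients along them
# are the features used for classification. Spectra here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
freq = np.linspace(0.1, 3.0, 200)                    # THz axis (illustrative)
wet = 1.0 / (1 + freq[None, :]) + 0.02 * rng.standard_normal((20, 200))
dry = 0.4 / (1 + freq[None, :]) + 0.02 * rng.standard_normal((20, 200))
X = np.vstack([wet, dry])

Xc = X - X.mean(axis=0)                              # centre the spectra
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)    # rows of Vt: KL axes
coeffs = Xc @ Vt[:2].T                               # first two KL coefficients

# Wet and dry leaves separate along the first KL axis.
print("wet mean PC1:", coeffs[:20, 0].mean().round(2))
print("dry mean PC1:", coeffs[20:, 0].mean().round(2))
```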

Relevance:

30.00%

Abstract:

Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define, from the literature, the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.

Relevance:

30.00%

Abstract:

With many operational centers moving toward models with grid lengths on the order of 1 km for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields and comparisons of rainfall amounts, initiation, cell statistics, and a scale-selective verification technique. It is shown that the 4- and 1-km-grid-length models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km model suffers from overly large convective cells and delayed initiation, because the grid length is too long to reproduce the convection correctly in explicit form. These problems are not as evident in the 1-km model, although it does suffer from too many small cells in some situations. Both the 4- and 1-km models suffer from poor representation at the start of the forecast, during the period when the high-resolution detail is spinning up from the lower-resolution (12 km) starting data. A scale-selective precipitation verification technique implies that at later times in the forecasts (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
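
Scale-selective verification of precipitation is commonly done with a neighbourhood measure such as the fractions skill score (FSS). The abstract does not name its exact technique, so the sketch below is illustrative of the general idea: compare forecast and observed exceedance fractions over increasing neighbourhood sizes.

```python
# Fractions skill score (FSS): a common scale-selective verification
# measure for precipitation. Illustrative only; the paper's exact
# technique is not specified in the abstract. Requires scipy.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, scale):
    """FSS of threshold-exceedance fields at a given neighbourhood scale."""
    f = uniform_filter((forecast >= threshold).astype(float), size=scale)
    o = uniform_filter((observed >= threshold).astype(float), size=scale)
    mse = np.mean((f - o) ** 2)
    ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan

rng = np.random.default_rng(2)
obs = rng.gamma(0.3, 2.0, size=(120, 120))   # synthetic rain field
fcst = np.roll(obs, 5, axis=1)               # spatially displaced forecast
for scale in (1, 5, 25):
    print(f"scale {scale:>2}: FSS = {fss(fcst, obs, 1.0, scale):.2f}")
```

As the neighbourhood widens, displacement errors are forgiven and the score rises, which is exactly the sense in which higher-resolution forecasts can verify well at some scales and poorly at others.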