24 results for Constraints-Led Approach


Relevance: 30.00%

Publisher:

Abstract:

Whether for investigative or intelligence aims, crime analysts often face the need to analyse the spatiotemporal distribution of crimes or of traces left by suspects. This article presents a visualisation methodology supporting recurrent practical analytical tasks such as the detection of crime series or the analysis of traces left by digital devices like mobile phones or GPS devices. The proposed approach has led to the development of a dedicated tool that has proven its effectiveness in real inquiries and intelligence practice. It supports a more fluent visual analysis of the collected data and may provide critical clues to support police operations, as exemplified by the presented case studies.

Developmental constraints have been postulated to limit the space of feasible phenotypes and thus to shape animal evolution. These constraints have been suggested to be strongest during either early or mid-embryogenesis, corresponding to the early conservation model or the hourglass model, respectively. Conflicting results have been reported, but recent studies of animal transcriptomes have favored the hourglass model. Studies usually report descriptive statistics calculated for all genes over all developmental time points. This introduces dependencies between the sets of compared genes and may lead to biased results. Here we overcome this problem using an alternative modular analysis. We used the Iterative Signature Algorithm to identify distinct modules of genes co-expressed specifically in consecutive stages of zebrafish development. We then performed a detailed comparison of several gene properties between modules, allowing for a less biased and more powerful analysis. Notably, our analysis corroborated the hourglass pattern at the regulatory level, with the sequences of regulatory regions being most conserved for genes expressed in mid-development, but not at the level of gene sequence, age, or expression, in contrast to some previous studies. The early conservation model was supported by gene duplication and gene birth, which were rarest for genes expressed in early development. Finally, for all gene properties, we observed the least conservation for genes expressed in late development or in adults, consistent with both models. Overall, with the modular approach, we showed that different levels of molecular evolution follow different patterns of developmental constraints. Thus both models are valid, but with respect to different genomic features.
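The Iterative Signature Algorithm used above alternates between scoring developmental stages from a candidate gene set and re-selecting genes from the chosen stages until a stable module emerges. The following is a minimal sketch of that iteration, assuming a gene-by-stage expression matrix z-scored per gene; the function name, thresholds, and data layout are illustrative, not the authors' implementation:

```python
import numpy as np

def isa_module(expr, seed_genes, t_g=2.0, t_c=2.0, max_iter=100):
    """One Iterative-Signature-Algorithm-style run (illustrative sketch).

    expr: genes x stages matrix, assumed z-scored per gene.
    seed_genes: boolean vector selecting an initial gene set.
    t_g, t_c: thresholds in units of score standard deviations.
    """
    genes = seed_genes.astype(float)
    for _ in range(max_iter):
        # Score each stage by the mean expression of the current gene set.
        stage_score = expr.T @ genes / max(genes.sum(), 1)
        stages = (np.abs(stage_score) > t_c * stage_score.std()).astype(float)
        # Score each gene over the selected stages and re-threshold.
        gene_score = expr @ stages / max(stages.sum(), 1)
        new_genes = (np.abs(gene_score) > t_g * gene_score.std()).astype(float)
        if np.array_equal(new_genes, genes):
            break  # fixed point reached: a stable gene/stage module
        genes = new_genes
    return genes.astype(bool), stages.astype(bool)
```

Starting from many random seeds and keeping the distinct fixed points is what yields the set of stage-specific modules compared in the study.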

Results of a field and microstructural study between the northern and the central bodies of the Lanzo plagioclase peridotite massif (NW Italy) indicate that the spatial distribution of deformation is asymmetric across kilometre-scale mantle shear zones. The southwestern part of the shear zone (footwall) shows a gradually increasing degree of deformation from porphyroclastic peridotites to mylonite, whereas the northeastern part (hanging wall) quickly grades into weakly deformed peridotites. Discordant gabbroic and basaltic dykes are asymmetrically distributed and far more abundant in the footwall of the shear zone. The porphyroclastic peridotite displays porphyroclastic zones and domains of igneous crystallization, whereas mylonites are characterized by elongated porphyroclasts embedded between fine-grained, polycrystalline bands of olivine, plagioclase, clinopyroxene, orthopyroxene, spinel, rare titanian pargasite, and domains of recrystallized olivine. Two types of melt impregnation textures have been found: (1) clinopyroxene porphyroclasts incongruently reacted with migrating melt to form orthopyroxene + plagioclase; (2) olivine porphyroclasts are partially replaced by interstitial orthopyroxene. The melt-rock reaction textures tend to disappear in the mylonites, indicating that deformation in the mylonite continued under subsolidus conditions. The pyroxene chemistry is correlated with grain size. High-Al pyroxene cores indicate high temperatures (1100-1030°C), whereas low-Al neoblasts display lower final equilibration temperatures (860°C). The spinel Cr-number [molar Cr/(Cr + Al)] and TiO2 concentrations show extreme variability, covering almost the entire range known from abyssal peridotites. The spinel compositions of porphyroclastic peridotites from the central body are more variable than spinel from mylonite, mylonite with ultra-mylonite bands, and porphyroclastic rocks of the northern body.
The spinel compositions probably indicate disequilibrium and would favour rapid cooling and a faster exhumation of the central peridotite body relative to the northern one. Our results indicate that melt migration and high-temperature deformation are juxtaposed both in time and space. Melt-rock reaction may have caused grain-size reduction, which in turn led to localization of deformation. It is likely that melt-lubricated, actively deforming peridotites acted as melt-focusing zones, with permeabilities higher than the surrounding, less deformed peridotites. Later, under subsolidus conditions, pinning in polycrystalline bands in the mylonites inhibited substantial grain growth and led to permanent weak zones in the upper-mantle peridotite, with a permeability that is lower than in the weakly deformed peridotites. Such an inversion in permeability might explain why actively deforming, fine-grained peridotite mylonite acted as a permeability barrier, and why ascending mafic melts might terminate and crystallize as gabbros along actively deforming shear zones. Melt-lubricated mantle shear zones provide a mechanism for explaining the discontinuous distribution of gabbros in ocean-continent transition zones, oceanic core complexes and ultraslow-spreading ridges.
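The spinel Cr-number above is a simple molar cation ratio, and converting it from measured oxide weight percentages takes one step. A small worked illustration follows; the wt% values are hypothetical, not data from this study:

```python
# Spinel Cr-number: molar Cr / (Cr + Al), from oxide weight percentages.
# The oxide wt% inputs below are hypothetical example values.
M_CR2O3 = 151.99  # molar mass of Cr2O3, g/mol
M_AL2O3 = 101.96  # molar mass of Al2O3, g/mol

def cr_number(cr2o3_wt, al2o3_wt):
    cr = 2 * cr2o3_wt / M_CR2O3   # moles of Cr cations per 100 g of sample
    al = 2 * al2o3_wt / M_AL2O3   # moles of Al cations per 100 g of sample
    return cr / (cr + al)

# A Cr2O3-poor, Al2O3-rich spinel gives a low Cr-number:
print(round(cr_number(20.0, 45.0), 2))  # prints 0.23
```

Because the factor of 2 cations per oxide formula unit cancels, only the relative molar amounts of Cr and Al matter, which is why Cr-number is a convenient dimensionless index of spinel composition.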

Forensic science is generally defined as the application of science to address questions related to the law. Too often, this view restricts the contribution of science to a single process that ultimately aims at bringing individuals to court while minimising the risk of miscarriage of justice. In order to go beyond this paradigm, we propose to refocus attention on traces themselves, as remnants of criminal activity, and on their information content. We postulate that traces contribute effectively to a wide variety of other informational processes that support decision making in many situations. In particular, they inform the actors of new policing strategies that place the treatment of information and intelligence at the centre of their systems. This contribution of forensic science to these security-oriented models is still not well identified and captured. In order to create the best conditions for the development of forensic intelligence, we suggest a framework that connects forensic science to intelligence-led policing (part I). Crime scene attendance and processing can be envisaged within this view. This approach gives indications about how to structure the knowledge used by crime scene examiners in their actual practice (part II).

Plants forming a rosette during their juvenile growth phase, such as Arabidopsis thaliana (L.) Heynh., are able to adjust the size, position and orientation of their leaves. These growth responses are under the control of the plant's circadian clock and follow a characteristic diurnal rhythm. For instance, increased leaf elongation and hyponasty - defined here as the increase in leaf elevation angle - can be observed when plants are shaded. Shading can be caused either by a decrease in the fluence rate of photosynthetically active radiation (direct shade) or by a decrease in the fluence rate of red compared with far-red radiation (neighbour detection). In this paper we report on a phenotyping approach based on laser scanning to measure the diurnal pattern of leaf hyponasty and the increase in rosette size. In short days, leaves showed constitutively increased leaf elevation angles compared with long days, but the overall diurnal pattern and the magnitude of upward and downward leaf movement were independent of daylength. Shade treatment led to elevated leaf angles during the first day of application, but did not affect the magnitude of upward and downward leaf movement on the following day. Using our phenotyping device, individual plants can be monitored non-invasively over several days under different light conditions. Hence, it represents a suitable tool to phenotype light- and circadian clock-mediated growth responses in order to better understand the underlying regulatory genetic network.
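Hyponasty is quantified above as a leaf elevation angle, which reduces to simple trigonometry once the scanner yields 3-D coordinates. A toy sketch, assuming the leaf axis is approximated by its base and tip points (an illustrative reduction, not the authors' laser-scanning pipeline):

```python
import math

def leaf_elevation_angle(base, tip):
    """Elevation angle (degrees) of the leaf axis above the horizontal
    plane, from 3-D (x, y, z) coordinates of the leaf base and tip.
    Hypothetical helper, not the published processing chain."""
    dx, dy, dz = (t - b for t, b in zip(tip, base))
    # Angle between the base-to-tip vector and its horizontal projection.
    return math.degrees(math.atan2(dz, math.hypot(dx, dy)))

# A leaf tip raised as far vertically as it extends horizontally
# sits at 45 degrees:
print(leaf_elevation_angle((0.0, 0.0, 0.0), (1.0, 0.0, 1.0)))  # prints 45.0
```

Tracking this angle per leaf at each scan time point would give the diurnal hyponasty curves discussed in the abstract.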

An emerging therapeutic approach for Duchenne muscular dystrophy is the transplantation of autologous myogenic progenitor cells genetically modified to express dystrophin. The use of this approach is challenged by the difficulty of maintaining these cells ex vivo while preserving their myogenic potential, and of ensuring sufficient transgene expression following their transplantation and myogenic differentiation in vivo. We investigated the use of the piggyBac transposon system to achieve stable gene expression when transferred into cultured mesoangioblasts and into murine muscles. Without selection, up to 8% of the mesoangioblasts expressed the transgene from 1 to 2 genomic copies of the piggyBac vector. Integration occurred mostly in intergenic genomic DNA, and transgene expression was stable in vitro. Intramuscular transplantation of mesoangioblasts carrying the transposon into mouse tibialis anterior muscles led to sustained myofiber GFP expression in vivo. In contrast, direct electroporation of the transposon-donor plasmids into mouse tibialis muscles in vivo did not lead to sustained transgene expression, despite molecular evidence of piggyBac transposition in vivo. Together, these findings provide proof-of-principle that the piggyBac transposon may be considered for mesoangioblast cell-based therapies of muscular dystrophies.

The production and use of false identity and travel documents in organized crime represent a serious and evolving threat. However, the present-day fight against this criminal problem is essentially driven by a case-by-case perspective, which suffers from linkage blindness and limited analysis capacity. To assist in overcoming these limitations, a process model was developed from a forensic perspective. It guides the systematic analysis and management of seized false documents to generate forensic intelligence that supports strategic and tactical decision making in an intelligence-led policing approach. The model is articulated on a three-level architecture that aims to assist in detecting and following up on general trends, production methods and links between cases or series. Analyses of a large dataset of counterfeit and forged identity and travel documents are used to illustrate the model, its three levels and their contribution. Examples point out how the proposed approach assists in detecting emerging trends, evaluating the black market's degree of structure, uncovering criminal networks, monitoring the quality of false documents, and identifying their weaknesses in order to orient the design of more secure travel and identity documents. The proposed process model is thought to have general application in forensic science and can readily be transposed to other fields of study.

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from the prospection of new water resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model that corrects the remaining approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost, and leads to an increase in the accuracy and robustness of the uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate (proxy) and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computational cost while correctly estimating the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.
The concept of a functional error model is therefore relevant not only for uncertainty propagation but also, and maybe even more so, for Bayesian inference problems. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated geostatistical realizations are sampled in accordance with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-stage approach, "two-stage MCMC", was introduced to avoid unnecessary simulations of the exact flow model through a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation.
An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy such that, as each new flow simulation is performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a saline intrusion problem in a coastal aquifer.
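The two-stage MCMC scheme described above screens each proposal with the cheap, error-corrected proxy and only runs the exact model on proposals that survive the first stage. The following is a minimal one-dimensional sketch of that delayed-acceptance logic; the likelihoods, proxy, and error model below are toy stand-ins, not the thesis's flow simulators:

```python
import math
import random

random.seed(0)

def exact_loglik(x):   # stand-in for the expensive exact flow model
    return -0.5 * x * x

def proxy_loglik(x):   # cheap approximation, deliberately biased
    return -0.5 * x * x + 0.3 * x

def error_model(x):    # learned correction of the proxy (here: the exact bias)
    return -0.3 * x

def two_stage_mcmc(n_steps, step=1.0):
    x, lp_exact = 0.0, exact_loglik(0.0)
    exact_calls, samples = 0, []
    for _ in range(n_steps):
        prop = x + random.uniform(-step, step)  # symmetric proposal
        # Stage 1: screen with the error-corrected proxy; cheap rejection.
        lp_proxy_new = proxy_loglik(prop) + error_model(prop)
        lp_proxy_cur = proxy_loglik(x) + error_model(x)
        if math.log(random.random()) < lp_proxy_new - lp_proxy_cur:
            # Stage 2: accept/reject with the exact model; the ratio
            # includes the proxy terms so the exact target is preserved.
            exact_calls += 1
            lp_new = exact_loglik(prop)
            a = (lp_new - lp_exact) + (lp_proxy_cur - lp_proxy_new)
            if math.log(random.random()) < a:
                x, lp_exact = prop, lp_new
        samples.append(x)
    return samples, exact_calls
```

Because the second-stage ratio compensates for the first-stage screening, the chain still targets the exact posterior; the saving comes from the exact model being evaluated only for proposals that pass the cheap stage.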

Regulation has in many cases been delegated to independent agencies, which raises the question of how the democratic accountability of these agencies is ensured. There are few empirical approaches to agency accountability. We offer such an approach, resting upon three propositions. First, we scrutinize agency accountability both de jure (accountability is ensured by the formal rights of accountability 'fora' to receive information and impose consequences) and de facto (the capability of fora to use these rights depends on resources and decision costs that affect the credibility of their sanctioning capacity). Second, accountability must be evaluated separately at the political, operational and managerial levels. And third, at each level accountability is enacted by a system of several (partially) interdependent fora, which together form an accountability regime. The proposed framework is applied to the case of the German Bundesnetzagentur's accountability regime, which shows its suitability for empirical purposes. Regulatory agencies are often considered independent, yet accountable. This article provides a realistic framework for the study of the accountability 'regimes' in which they are embedded. It emphasizes the need to identify the various actors (accountability fora) to which agencies are formally accountable (parliamentary committees, auditing bodies, courts, and so on) and to consider possible relationships between them. It argues that formal accountability 'on paper', as defined in official documents, does not fully account for de facto accountability, which depends on the resources possessed by the fora (mainly information-processing and decision-making capacities) and on the credibility of their sanctioning capacities. The article applies this framework to the German Bundesnetzagentur.