44 results for transparent conductor


Relevance: 10.00%

Abstract:

General Introduction. This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite the wide-ranging preferential access granted to developing countries by industrial ones under North-South trade agreements - whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or non-reciprocal, such as the GSP, AGOA or EBA - it has been claimed that the benefits from improved market access keep falling short of their full potential. RoOs are widely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from exploiting the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special-interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent.

Part I. In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Céline Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of the restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, in the case of PANEURO, the R-index is useful for summarizing how countries are affected differently by the same set of RoOs because of their different export baskets to the EU. Second, the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs.
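As a rough illustration of how such a synthetic index can be aggregated, the sketch below trade-weights an Estevadeordal-style ordinal restrictiveness score over a country's export basket; the scores, trade values and function are hypothetical placeholders, not the chapter's actual data or specification.

```python
# Illustrative sketch (not the chapter's actual code): aggregating an
# Estevadeordal-style ordinal restrictiveness score into a country-level
# R-index, weighting each HS6 tariff line by the country's export basket.
# The numbers below are made-up placeholders.

def r_index(lines):
    """lines: iterable of (export_value, roo_score) pairs, one per tariff line.
    roo_score is an ordinal restrictiveness score (e.g. 1 = least, 7 = most
    restrictive, in the spirit of Estevadeordal 2000)."""
    total = sum(value for value, _ in lines)
    if total == 0:
        return 0.0
    return sum(value * score for value, score in lines) / total

# Two hypothetical export baskets facing the same set of RoOs:
mexico_lines = [(120.0, 6), (80.0, 3), (40.0, 7)]   # value (USD m), score
acp_lines    = [(10.0, 2), (200.0, 4), (5.0, 7)]

print(round(r_index(mexico_lines), 2))  # trade-weighted R-index, e.g. 5.17
print(round(r_index(acp_lines), 2))     # same RoOs, different exposure, e.g. 3.98
```

The same set of rules thus yields different country-level restrictiveness depending on the composition of exports, which is the point the index is meant to capture.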
The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. The chapter focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen, 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs with a single instrument, a Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff, so converting all instruments into an MFC would improve transparency in much the same way as the "tariffication" of NTBs. The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to obtain a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to obtain an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin; this argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier: only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of good value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles and apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests.
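The following is a deliberately simplified caricature of the three-step exercise, assuming a linear utilization equation and a linear link between an MFC ceiling and restrictiveness; the coefficients and line-level data are invented, and the chapter's actual econometric specification is certainly richer.

```python
# Caricature of the three-step logic (hypothetical functional forms and
# made-up numbers; the actual estimation is more elaborate).
import numpy as np

# Step 1: suppose we estimated, per tariff line, something like
#   utilization = a + b * preference_margin - c * restrictiveness
a, b, c = 0.9, 1.0, 0.8            # pretend coefficient estimates

# Assumed link between an MFC ceiling m in [0, 1] and the restrictiveness it
# implies: restrictiveness = 1 - m (a tighter ceiling is more restrictive).

def simulated_mfc(utilization, preference):
    """Step 2: invert the fitted relationship to find the MFC that reproduces
    the utilization rate observed under the old RoO, line by line."""
    restrictiveness = (a + b * preference - utilization) / c
    return float(np.clip(1.0 - restrictiveness, 0.0, 1.0))

# Step 3: trade-weighted average across lines
# (toy data: observed utilization, preference margin, trade value).
lines = [(0.80, 0.05, 120.0), (0.55, 0.10, 60.0), (0.30, 0.02, 20.0)]
mfcs = [simulated_mfc(u, p) for u, p, _ in lines]
weights = np.array([w for _, _, w in lines])
print(float(np.average(mfcs, weights=weights)))  # overall MFC equivalent, here about 0.64
```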
The third chapter of the thesis considers whether the Europe Agreements of the EU, with their current sets of RoOs, could be a model for future EU-centred PTAs. First, I studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs used before 1997 and the "single list" RoOs used since 1997. Second, using a constant elasticity of transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I estimated the trade effects of the EU's RoOs. The estimates suggest that much of the market access conferred by the EAs outside sensitive sectors was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from the end of communism to accession.

Part II. The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument that has the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA), under which anti-dumping measures must be reviewed no later than five years after their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review provision had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis.

First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would imply a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practice, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the post-agreement survival function) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sectors, we find a larger increase in the hazard rate for AD measures covered by the Agreement than for other measures.
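For concreteness, the sketch below illustrates, on simulated data, the two kinds of specification described above: a Poisson regression of revocations on initiations lagged five years interacted with a post-agreement dummy, and a difference-in-differences Cox regression. The variable names and the data-generating process are hypothetical, and statsmodels/lifelines are used only as convenient stand-ins for the chapter's actual estimation.

```python
# Illustrative sketch of the two empirical strategies, on made-up data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)

# --- Count-data analysis: revocations on initiations lagged five years ---
n = 200
panel = pd.DataFrame({
    "initiations_lag5": rng.poisson(5, n),
    "post_agreement": rng.integers(0, 2, n),
})
lam = np.exp(0.2 + 0.05 * panel.initiations_lag5
             + 0.10 * panel.initiations_lag5 * panel.post_agreement)
panel["revocations"] = rng.poisson(lam)
poisson_fit = smf.poisson(
    "revocations ~ initiations_lag5 * post_agreement", data=panel).fit(disp=0)
print(poisson_fit.params)  # a one-for-one five-year cycle would imply a much larger lag coefficient

# --- Survival analysis: difference-in-differences Cox regression ---
m = 500
cases = pd.DataFrame({
    "post_agreement": rng.integers(0, 2, m),
    "wto_target": rng.integers(0, 2, m),
})
cases["did"] = cases.post_agreement * cases.wto_target
hazard = 0.15 * np.exp(0.4 * cases.did)        # covered measures die faster in this toy world
cases["duration"] = rng.exponential(1 / hazard)
cases["revoked"] = 1                            # no censoring in this toy example
cph = CoxPHFitter().fit(cases, duration_col="duration", event_col="revoked")
print(cph.summary[["coef", "p"]])               # positive 'did' coefficient = higher post-agreement hazard for covered measures
```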

Relevance: 10.00%

Abstract:

Morphology is the aspect of language concerned with the internal structure of words. In the past decades, a large body of masked-priming (behavioral and neuroimaging) data has suggested that the visual word recognition system automatically decomposes any morphologically complex word into a stem and its constituent morphemes. Yet the extent to which morphology relies on other reading processes (e.g., orthography and semantics), as well as its underlying neuronal mechanisms, remains to be determined. In the current magnetoencephalography study, we addressed morphology from the perspective of the unification framework: by applying the Hold/Release paradigm, morphological unification was simulated via the assembly of internal morphemic units into a whole word. Trials representing real words were divided into words with a transparent (true) or a nontransparent (pseudo) morphological relationship. Morphological unification of truly suffixed words was faster and more accurate and additionally enhanced induced oscillations in the narrow gamma band (60-85 Hz, 260-440 ms) in the left posterior occipitotemporal junction. This neural signature could not be explained by mere automatic lexical processing (i.e., stem perception); more likely, it reflects a semantic access step during the morphological unification process. By demonstrating the validity of unification at the morphological level, this study contributes to the vast empirical evidence on unification across other language processes. Furthermore, we point out that morphological unification relies on the retrieval of lexical-semantic associations via induced gamma-band oscillations in a cerebral hub region for visual word form processing.

Relevance: 10.00%

Abstract:

Introduction: Congenital intravitreal cysts are rare and generally do not cause ocular complications. Objectives and Methods: We report the case of a non-pigmented intravitreal cyst that we followed for six years in order to analyse its growth behaviour. Case report: A 62-year-old patient followed for Morbihan syndrome consulted because of a shadow perceived in the visual field of the right eye. Corrected visual acuity was 1.0. Slit-lamp examination showed a transparent cyst floating in the vitreous. Ultrasound confirmed the presence of a cyst with a diameter of 4.37 mm, surrounded by a second, larger cyst of 7.15 mm in diameter. A clinical follow-up after six years showed that the diameters of the cysts had increased to 6.20 mm for the small cyst and 7.95 mm for the large one. The volumes had increased from 43.70 mm3 to 124.97 mm3 for the small cyst and from 191.39 mm3 to 263.09 mm3 for the large one. Discussion: Since the cysts had no influence on visual acuity, no treatment was proposed. Conclusion: A non-pigmented intravitreal cyst can increase considerably in volume over six years of observation.
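The reported volumes are consistent with treating each cyst as a sphere of the measured diameter, V = πd³/6; this is a reconstruction (the abstract does not state the formula used), and it reproduces the reported figures to within measurement rounding.

```latex
% Sphere volume from diameter: V = \frac{\pi d^{3}}{6}
\begin{align*}
V_{\text{small, baseline}} &= \tfrac{\pi}{6}\,(4.37\,\text{mm})^{3} \approx 43.7\,\text{mm}^{3}, &
V_{\text{small, 6 y}} &= \tfrac{\pi}{6}\,(6.20\,\text{mm})^{3} \approx 124.8\,\text{mm}^{3},\\
V_{\text{large, baseline}} &= \tfrac{\pi}{6}\,(7.15\,\text{mm})^{3} \approx 191.4\,\text{mm}^{3}, &
V_{\text{large, 6 y}} &= \tfrac{\pi}{6}\,(7.95\,\text{mm})^{3} \approx 263.1\,\text{mm}^{3}.
\end{align*}
```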

Relevance: 10.00%

Abstract:

Urgonian-type carbonates are a characteristic feature of many late Early Cretaceous shallow-marine, tropical and subtropical environments. The presence of typical photozoan carbonate-producing communities including corals and rudists indicates the prevalence of warm, transparent and presumably oligotrophic conditions in a period otherwise characterized by the high density of globally occurring anoxic episodes. Of particular interest, therefore, is the exploration of relationships between Urgonian platform growth and palaeoceanographic change. In the French and Swiss Jura Mountains, the onset and evolution of the Urgonian platform have been controversially dated, and a correlation with other, better dated, successions is correspondingly difficult. It is for this reason that the stratigraphy and sedimentology of a series of recently exposed sections (Eclepens, Vaumarcus and Neuchatel) and, in addition, the section of the Gorges de l'Areuse were analysed. Calcareous nannofossil biostratigraphy, the evolution of phosphorus contents of bulk rock, a sequence-stratigraphic interpretation and a correlation of drowning unconformities with better dated sections in the Helvetic Alps were used to constrain the age of the Urgonian platform. The sum of the data and field observations suggests the following evolution: during the Hauterivian, important outward and upward growth of a bioclastic and oolitic carbonate platform is documented in two sequences, separated by a phase of platform drowning during the late Early Hauterivian. Following these two phases of platform growth, a second drowning phase occurred during the latest Hauterivian and Early Barremian, which was accompanied by significant platform erosion and sediment reworking. The Late Barremian witnessed the renewed installation of a carbonate platform, which initiated with a phase of oolite production, and which progressively evolved into a typical Urgonian carbonate platform colonized by corals and rudists. This phase terminated at the latest in the middle Early Aptian, due to a further drowning event. The evolution of this particular platform segment is compatible with that of more distal and well-dated segments of the same northern Tethyan platform preserved in the Helvetic zone of the Alps and in the northern subalpine chains (Chartreuse and Vercors).

Relevance: 10.00%

Abstract:

The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples be analysed in an accurate and reproducible way and compared in an objective and automated way, the latter requirement being due to the large number of comparisons that are necessary in both scenarios. A research programme was therefore designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, high-performance thin-layer chromatography (HPTLC), despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model, and therefore to move away from the traditional subjective approach, which is based entirely on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional approach both for the search of ink specimens in ink databases and for the interpretation of their evidential value.
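The abstract does not specify the probabilistic model; as a generic illustration of how a score-based likelihood ratio can assign evidential value to an ink comparison, here is a minimal sketch in which the similarity measure, calibration data and distributions are invented placeholders rather than the algorithms of Parts I-II.

```python
# Generic score-based likelihood-ratio sketch (illustrative only; the papers'
# actual comparison metrics and probabilistic model are more sophisticated).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def similarity(profile_a, profile_b):
    """Similarity between two ink profiles (here: Pearson correlation of
    normalised HPTLC intensity traces, a stand-in for the papers' algorithms)."""
    return float(np.corrcoef(profile_a, profile_b)[0, 1])

# Calibration scores from pairs of known same-source and different-source inks
# (made-up numbers standing in for a reference collection).
same_source_scores = rng.normal(0.95, 0.02, 500)
diff_source_scores = rng.normal(0.60, 0.15, 500)
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def likelihood_ratio(questioned, reference):
    s = similarity(questioned, reference)
    return float(f_same(s)[0] / f_diff(s)[0])   # LR > 1 supports the same-source proposition

# Toy questioned/reference profiles:
ref = rng.random(50)
questioned = ref + rng.normal(0, 0.05, 50)
print(likelihood_ratio(questioned, ref))
```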

Relevance: 10.00%

Abstract:

INTRODUCTION. Multimodal strategies targeted at the prevention of catheter-related infections combine education on general hygiene measures with specific guidelines for catheter insertion and dressing (1). OBJECTIVES. In this context, we tested the introduction of chlorhexidine (CHX)-impregnated sponges (2). METHODS. In our 32-bed mixed ICU, prospective surveillance of primary bacteremia and of microbiologically documented catheter-related bloodstream infections (CRBSI) is performed according to standardized definitions. New guidelines for central venous catheter (CVC) dressing combined a CHX-impregnated sponge (BioPatch) with a transparent occlusive dressing (Tegaderm) and planned dressing changes every 7 days. To contain costs, BioPatch was used only for internal jugular and femoral sites. Other elements of the prevention programme were not modified (overall compliance with hand hygiene 65-68%; non-coated catheters except for burned patients [173 out of 9,542 patients]; maximal sterile barriers for insertion; alcoholic solution of CHX for skin disinfection). RESULTS. Median monthly CVC-days increased from 710 to 749, 855 and 965 in 2006, 2007, 2008 and 2009, respectively (p < 0.01). Following introduction of the new guidelines (4Q2007), the average monthly rate of infections decreased from 3.7 (95% CI: 2.6-4.8) episodes/1000 CVC-days over the 24 preceding months to 2.2 (95% CI: 1.5-2.8) over the 24 following months (p = 0.031). Dressings needed to be changed every 3-4 days. The decrease in catheter-related infections observed in all consecutively admitted patients is comparable to that recently shown in a randomized controlled trial (2). Further generalization to all CVC and arterial catheter sites may be justified. CONCLUSIONS. Our data strongly suggest that CHX-impregnated sponges, combined with occlusive dressings, for all CVC inserted at internal jugular and/or femoral sites significantly reduce the rate of primary bacteremia and CRBSI. REFERENCES. (1) Eggimann P, Harbarth S, Constantin MN, Touveneau S, Chevrolet JC, Pittet D. Impact of a prevention strategy targeted at vascular-access care on incidence of infections acquired in intensive care. Lancet 2000; 355:1864-1868. (2) Timsit JF, Schwebel C, Bouadma L, Geffroy A, Garrouste-Org, Pease S et al. Chlorhexidine-impregnated sponges and less frequent dressing changes for prevention of catheter-related infections in critically ill adults: a randomized controlled trial. JAMA 2009; 301(12):1231-1241.
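For reference, an infection rate per 1000 CVC-days with an exact Poisson confidence interval can be computed as sketched below; the event counts and exposure used here are invented illustrations, and the abstract's own figures are averages of monthly rates rather than aggregate estimates.

```python
# Generic sketch: infection rate per 1000 CVC-days with an exact (Garwood)
# Poisson confidence interval. Counts and exposure are made up.
from scipy.stats import chi2

def rate_per_1000(events, cvc_days, alpha=0.05):
    rate = 1000.0 * events / cvc_days
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, 1000.0 * lower / cvc_days, 1000.0 * upper / cvc_days

# e.g. 63 episodes over ~17,000 CVC-days before vs 47 over ~21,500 after
print(rate_per_1000(63, 17000))   # roughly 3.7 per 1000 CVC-days
print(rate_per_1000(47, 21500))   # roughly 2.2 per 1000 CVC-days
```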

Relevance: 10.00%

Abstract:

Recognition and identification processes for deceased persons. Determining the identity of deceased persons is a routine task performed essentially by police departments and forensic experts. This thesis highlights the processes necessary for the proper and transparent determination of the civil identities of deceased persons. The identity of a person is defined as the establishment of a link between that person ("the source") and information pertaining to the same individual ("identifiers"). Various forms of identity can emerge depending on the nature of the identifiers; two distinct types are retained, namely civil identity and biological identity. The thesis examines four processes: identification by witnesses (the recognition process) and comparisons of fingerprints, dental data and DNA profiles (the identification processes). For the recognition process, memory function is examined, which helps to clarify the circumstances that may give rise to errors; to make the process more rigorous, a body-presentation procedure is proposed for investigators. Before examining the other processes, three general concepts specific to forensic science are considered with regard to the identification of a deceased person: the divisibility of matter (Inman and Rudin), transfer (Locard) and uniqueness (Kirk). These concepts can be applied to the task at hand, although some require a slightly broader scope of application. A cross-comparison of common forensic fields and the identification of deceased persons reveals certain differences, including (1) a reverse positioning of the source (i.e. the source is not sought from traces; rather, the identifiers are obtained from the source); (2) the need for civil identity determination in addition to the individualisation stage; and (3) a more restricted population (a closed set rather than an open one). For fingerprint, dental and DNA data, intra-variability and inter-variability are examined, as well as post-mortem (PM) changes in these identifiers; ante-mortem (AM) identifiers are located and AM-PM comparisons are made. For DNA, it is shown that direct identifiers (taken from a person whose civil identity has been alleged) tend to lead to a determination of civil identity, whereas indirect identifiers (obtained from a close relative) lead towards a determination of biological identity. For each process, a Bayesian model is presented that includes the sources of uncertainty deemed relevant, and the results of the different processes are combined to structure and summarise an overall outcome and methodology. The modelling of dental data presents a specific difficulty with respect to intra-variability, which in itself is not quantifiable; the concept of "validity" is therefore suggested as a possible solution, drawing on various parameters that have an acknowledged impact on dental intra-variability. In cases where identifying deceased persons proves extremely difficult because of the limited discrimination of certain procedures, the use of a Bayesian approach is of great value in providing a transparent, synthetic assessment. Summary: Recognition and identification processes for deceased persons. The individualisation of deceased persons is a routine task performed mainly by police services, odontologists and genetics laboratories. The objective of this research is to present processes for validly determining, with controlled uncertainty, the civil identities of deceased persons.
The notion of identity is examined first. The identity of a person is defined as the establishment of a link between that person and information concerning him or her; this information is referred to as identifiers. Two distinct forms of identity are retained: civil identity and biological identity. Four main processes are examined: eyewitness recognition and the comparisons of fingerprints, dental data and DNA profiles. Concerning the recognition process, the way memory works is examined, an approach that makes it possible to identify the parameters that can lead to errors; in order to provide a rigorous framework for this process, a body-presentation procedure is proposed for investigators. Before examining the other processes, the general concepts of forensic science are considered from the particular angle of the identification of deceased persons: the divisibility of matter (Inman and Rudin), transfer (Locard) and uniqueness (Kirk). These concepts can be applied, although some require a slight broadening of their principles. A cross-comparison between the usual forensic domains and the identification of deceased persons reveals differences such as a reversed positioning of the source (the source is no longer sought from traces; instead, identifiers are sought starting from the source), the need to determine a civil identity in addition to carrying out an individualisation, and a population of interest that is closed rather than open. For fingerprints, teeth and DNA, intra- and inter-variability are examined, together with their post-mortem (PM) modifications, the location of ante-mortem (AM) identifiers and the AM-PM comparisons. For DNA, it is shown that direct identifiers (originating from the person whose civil identity is assumed) tend to determine a civil identity, whereas indirect identifiers (originating from a close relative) tend to determine a biological identity. The results from the different processes are then synthesised by means of Bayesian models; for each process, a model is presented that incorporates the parameters recognised as relevant. At this stage a difficulty arises: that of quantifying dental intra-variability, for which no precise rule exists. The proposed solution is to introduce a concept of validity that incorporates various parameters with a known impact on intra-variability. The possibility of formulating a synthetic value through the Bayesian approach proves to be of great help in very difficult cases in which each individual process has limited discriminating power.
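As a minimal illustration of the Bayesian combination described above (not the thesis's actual models), the sketch below updates prior odds from a closed set of candidates with likelihood ratios from independent identification processes; the numbers are hypothetical.

```python
# Minimal caricature of combining several identification processes with Bayes'
# rule in a closed population (hypothetical likelihood ratios; the thesis's
# models also handle post-mortem changes, dental 'validity', etc.).

def posterior_probability(prior_odds, likelihood_ratios):
    """Posterior probability that the body is the putative person, given
    independent likelihood ratios from each process (recognition, fingerprints,
    dental data, DNA)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Closed set of 20 missing persons -> prior odds of 1/19 for a given candidate.
prior_odds = 1.0 / 19.0
lrs = [30.0, 200.0, 8.0]   # e.g. recognition, dental comparison, partial DNA profile
print(posterior_probability(prior_odds, lrs))   # ~0.9996
```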

Relevance: 10.00%

Abstract:

We launched a cryptoendolithic habitat, made of a gneissic impactite inoculated with Chroococcidiopsis sp., into Earth orbit. After orbiting the Earth for 16 days, the rock entered the Earth's atmosphere and was recovered in Kazakhstan. The heat of entry ablated the rock and heated it to a temperature well above the upper temperature limit for life, down to below the depth at which light levels become insufficient for photosynthetic organisms (approximately 5 mm), thus killing all of its photosynthetic inhabitants. This experiment shows that atmospheric transit acts as a strong biogeographical dispersal filter to the interplanetary transfer of photosynthesis. Following atmospheric entry we found that a transparent, glassy fusion crust had formed on the outside of the rock. Re-inoculated Chroococcidiopsis grew preferentially under the fusion crust in the relatively unaltered gneiss beneath, and organisms under the fusion crust grew approximately twice as fast as organisms on the control rock. Thus, the biologically destructive effects of atmospheric transit can generate entirely novel and improved endolithic habitats on the destination planetary body for organisms that survive the dispersal filter. The experiment advances our understanding of how island biogeography works on the interplanetary scale.

Relevance: 10.00%

Abstract:

Corneal integrity and transparency are indispensable for good vision. Cornea homeostasis is entirely dependent upon corneal stem cells, which are required for complex wound-healing processes that restore corneal integrity following epithelial damage. Here, we found that leucine-rich repeats and immunoglobulin-like domains 1 (LRIG1) is highly expressed in the human holoclone-type corneal epithelial stem cell population and sporadically expressed in the basal cells of ocular-surface epithelium. In murine models, LRIG1 regulated corneal epithelial cell fate during wound repair. Deletion of Lrig1 resulted in impaired stem cell recruitment following injury and promoted a cell-fate switch from transparent epithelium to keratinized skin-like epidermis, which led to corneal blindness. In addition, we determined that LRIG1 is a negative regulator of the STAT3-dependent inflammatory pathway. Inhibition of STAT3 in corneas of Lrig1-/- mice rescued pathological phenotypes and prevented corneal opacity. Additionally, transgenic mice that expressed a constitutively active form of STAT3 in the corneal epithelium had abnormal features, including corneal plaques and neovascularization similar to that found in Lrig1-/- mice. Bone marrow chimera experiments indicated that LRIG1 also coordinates the function of bone marrow-derived inflammatory cells. Together, our data indicate that LRIG1 orchestrates corneal-tissue transparency and cell fate during repair, and identify LRIG1 as a key regulator of tissue homeostasis.

Relevance: 10.00%

Abstract:

The athlete biological passport (ABP) was recently implemented in anti-doping work and is based on the individual and longitudinal monitoring of haematological or urinary markers. These markers may be influenced by illicit procedures performed by some athletes with the intent of improving exercise performance, which makes the ABP a valuable tool in the fight against doping. The passport has been defined as an individual and longitudinal record of markers that belong to the biological cascades influenced by the administration of forbidden hormones or, more generally, that are affected by biological manipulations capable of improving an athlete's performance. So far, the haematological and steroid-profile modules of the ABP have been implemented in major sport organisations, and a further module is under development. The individual and longitudinal monitoring of selected blood and urine markers is of interest because intra-individual variability is lower than the corresponding inter-individual variability. Among the key prerequisites for the implementation of the ABP is its capacity to withstand legal and scientific challenges. The ABP should be implemented in the most transparent way and with the necessary independence between the planning, interpretation and result management of the passport. To ensure this, the Athlete Passport Management Unit (APMU) was created and WADA issued several technical documents associated with the passport, so that profiles are produced in a way that can withstand any scientific or legal criticism. This goal can be reached only by strictly following the important steps in the chain of production of the results and in the management of the interpretation of the passport; the technical documents associated with the guidelines set out the requirements for passport operation. The ABP has very recently been completed by the steroid-profile module. As for the haematological module, individual and longitudinal monitoring is applied, and the interpretation cascade is managed by a specific APMU in a similar way to the haematological module. Thus, after exclusion of any possible pathology, a specific deviation from the individual norms is considered as a potential misuse of hormones or other modulators to enhance performance.
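As a toy illustration of why individual, longitudinal monitoring is attractive, the sketch below derives reference limits from an athlete's own previous values; the real ABP uses an adaptive Bayesian model that combines population priors with the individual's history, so this is only a simplification with invented numbers.

```python
# Illustrative sketch of individual, longitudinal monitoring: flag a new test
# result that falls outside limits derived from the athlete's own baseline.
# Simplification; the ABP's actual adaptive Bayesian model is more elaborate.
import numpy as np

def individual_limits(baseline_values, z=3.0):
    """Individual reference range for one marker (e.g. haemoglobin in g/dL),
    exploiting the fact that intra-individual variability is smaller than
    inter-individual variability."""
    baseline = np.asarray(baseline_values, dtype=float)
    mean, sd = baseline.mean(), baseline.std(ddof=1)
    return mean - z * sd, mean + z * sd

history = [14.8, 15.1, 14.9, 15.0, 15.2]   # hypothetical haemoglobin values
low, high = individual_limits(history)
new_value = 17.4
print((low, high), new_value > high)        # an atypically high value triggers expert review
```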

Relevance: 10.00%

Abstract:

Forensic science is playing an ever-greater role in criminal investigation, and the scientific issues at stake from the investigation scene to the criminal trial are numerous. Many actors are brought together: technicians, scientists, forensic pathologists, investigators and magistrates. Tensions are perceptible among them, and also regarding the place of science in the criminal process. The main reason for this situation is that the way physical evidence is taken into account in the criminal investigation and the criminal trial is not clearly established. The training of lawyers and investigators does not enable them to supervise scientific investigations, and the role and place of scientists in the criminal investigation need to be re-examined. Moreover, the reasoning methods involved in the scientific investigation of a judicial case are complex, and their poor understanding contributes to the tensions observed; these methods need to be explored in greater depth. Medical reasoning offers a possible model, enriched by work in semiotics. Resolving these tensions requires the introduction of a new actor, the criminalistics coordinator, which constitutes a paradigm shift and a new, complex scientific activity. This scientist works alongside the investigator and the magistrate throughout the judicial process, from the investigation scene to the criminal trial. The paradigm applies whatever the judicial model, accusatorial or inquisitorial, and whatever the institutional structures. This thesis proposes that the criminalistics coordinator be a high-level scientist with solid theoretical and practical training. The approach is fundamentally ethical, because it focuses on a material witness, guarantees the preservation of human rights, and defines a transparent and balanced process for the construction of evidence.

Relevance: 10.00%

Abstract:

BACKGROUND: Surveillance of multiple congenital anomalies is considered to be more sensitive for the detection of new teratogens than surveillance of all or isolated congenital anomalies. The current literature proposes manual review of all cases for classification into isolated or multiple congenital anomalies. METHODS: Multiple anomalies were defined as two or more major congenital anomalies, excluding sequences and syndromes. A computer algorithm for the classification of major congenital anomaly cases in the EUROCAT database according to International Classification of Diseases, 10th revision (ICD-10) codes was programmed, further developed, and implemented for one year's data (2004) from 25 registries. The cases classified as potential multiple congenital anomalies were manually reviewed by three geneticists to reach an agreed final classification as "multiple congenital anomaly" cases. RESULTS: A total of 17,733 cases with major congenital anomalies were reported, giving an overall prevalence of major congenital anomalies of 2.17%. The computer algorithm classified 10.5% of all cases as potentially multiple congenital anomalies; after manual review of these cases, 7% were agreed to have true multiple congenital anomalies. Furthermore, the algorithm classified 15% of all cases as having chromosomal anomalies, 2% as monogenic syndromes, and 76% as isolated congenital anomalies. The proportion of multiple anomalies varies by congenital anomaly subgroup, reaching up to 35% in cases with bilateral renal agenesis. CONCLUSIONS: The implementation of the EUROCAT computer algorithm is a feasible, efficient, and transparent way to improve the classification of congenital anomalies for surveillance and research.
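A much-simplified sketch of this kind of rule-based classification is given below; the ICD-10 prefixes and exclusion lists are illustrative placeholders rather than EUROCAT's actual definitions, which also handle sequences and combinations within the same organ system.

```python
# Much-simplified sketch of rule-based classification of a case's ICD-10 codes
# into isolated / potentially multiple / chromosomal / monogenic-syndrome
# categories. Code prefixes and exclusion lists are illustrative placeholders.

CHROMOSOMAL = ("Q90", "Q91", "Q92", "Q93", "Q95", "Q96", "Q97", "Q98", "Q99")
MONOGENIC_SYNDROMES = ("Q87",)                 # e.g. certain malformation syndromes
MINOR_OR_EXCLUDED = ("Q101", "Q170", "Q270")   # codes not counted as major anomalies

def classify(case_codes):
    codes = [c.replace(".", "") for c in case_codes]
    if any(c.startswith(CHROMOSOMAL) for c in codes):
        return "chromosomal"
    if any(c.startswith(MONOGENIC_SYNDROMES) for c in codes):
        return "monogenic syndrome"
    major = [c for c in codes if c.startswith("Q") and c not in MINOR_OR_EXCLUDED]
    if len(set(major)) >= 2:
        return "potentially multiple (flag for manual review)"
    return "isolated"

print(classify(["Q21.0"]))             # isolated
print(classify(["Q21.0", "Q79.2"]))    # potentially multiple
print(classify(["Q90.9", "Q21.0"]))    # chromosomal
```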

Relevance: 10.00%

Abstract:

Two spatial tasks were designed to test specific properties of spatial representation in rats. In the first task, rats were trained to locate an escape hole at a fixed position in a visually homogeneous arena. This arena was connected to a periphery from which a full view of the room environment was available. Rats therefore had to rely on the memory trace of their previous position in the periphery to discriminate a position within the central region. Under these experimental conditions, the test animals showed significant discrimination of the training position without a specific local view. In the second task, rats were trained in a radial maze consisting of tunnels that were transparent at their distal ends only. Because the central part of the maze was non-transparent, rats had to plan and execute appropriate trajectories without specific visual feedback from the environment. This situation was intended to encourage reliance on prospective memory of the non-visited arms when selecting the next move. Our results show that acquisition performance was only slightly lower than in a completely transparent maze and considerably higher than in a translucent maze or in darkness. These two series of experiments indicate (1) that rats can learn the relative positions of different places that share no common visual panorama, and (2) that they are able to plan and execute a sequence of visits to several places without direct visual feedback about their relative position.