917 results for Due Process of Law


Relevance: 100.00%

Abstract:

Multiple sclerosis (MS) is a life-long, potentially debilitating disease of the central nervous system (CNS). MS is considered to be an immune-mediated disease, and the presence of autoreactive peripheral lymphocytes in CNS compartments is believed to be critical in the process of demyelination and tissue damage in MS. Although MS is not currently a curable disease, several disease-modifying therapies (DMTs) are now available, or are in development. These DMTs are all thought to primarily suppress autoimmune activity within the CNS. Each therapy has its own mechanism of action (MoA) and, as a consequence, each has a different efficacy and safety profile. Neurologists can now select therapies on a more individual, patient-tailored basis, with the aim of maximizing potential for long-term efficacy without interruptions in treatment. The MoA and clinical profile of MS therapies are important considerations when making that choice or when switching therapies due to suboptimal disease response. This article therefore reviews the known and putative immunological MoAs alongside a summary of the clinical profile of therapies approved for relapsing forms of MS, and those in late-stage development, based on published data from pivotal randomized, controlled trials.

Relevance: 100.00%

Abstract:

In this study I try to explain the systemic problem of the low economic competitiveness of nuclear energy for the production of electricity by carrying out a biophysical analysis of its production process. Given that neither econometric approaches nor one-dimensional methods of energy analysis are effective, I introduce the concept of biophysical explanation as a quantitative analysis capable of handling the inherent ambiguity associated with the concept of energy. In particular, the quantities of energy considered relevant for the assessment can only be measured and aggregated after agreeing on a pre-analytical definition of a grammar characterizing a given set of finite transformations. Using this grammar it becomes possible to provide a biophysical explanation for the low economic competitiveness of nuclear energy in the production of electricity. When comparing the various unit operations of the process of production of electricity with nuclear energy to the analogous unit operations of the process of production of fossil energy, we see that the various phases of the process are the same. The only difference lies in the characteristics of the process associated with the generation of heat, which are completely different in the two systems. Since the cost of production of fossil energy provides the baseline of economic competitiveness of electricity, the (lack of) economic competitiveness of the production of electricity from nuclear energy can be studied by comparing the biophysical costs associated with the different unit operations taking place in nuclear and fossil power plants when generating process heat or net electricity. In particular, the analysis focuses on fossil-fuel requirements and labor requirements for those phases that both nuclear plants and fossil energy plants have in common: (i) mining; (ii) refining/enriching; (iii) generating heat/electricity; (iv) handling the pollution/radioactive wastes. By adopting this approach, it becomes possible to explain the systemically low economic competitiveness of nuclear energy in the production of electricity in terms of: (i) its dependence on oil, limiting its possible role as a carbon-free alternative; (ii) the choices made in relation to its fuel cycle, especially whether it includes reprocessing operations or not; (iii) the unavoidable uncertainty in the definition of the characteristics of its process; (iv) its large inertia (lack of flexibility) due to issues of time scale; and (v) its low power level.
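The comparison described above, aggregating fossil-fuel and labour requirements over the unit operations that the two chains share, can be sketched as a simple per-phase accounting. A minimal sketch in Python; the phase names follow the abstract, and any numbers fed to these functions would have to come from the study's own accounting (none are reproduced here).

```python
# Phases shared by the nuclear and fossil chains, as listed in the abstract.
PHASES = ("mining", "refining/enriching", "generating heat/electricity", "handling wastes")

def total_requirements(per_phase):
    """Aggregate biophysical costs (fossil-fuel MJ and labour hours per net kWh)
    over the unit operations of one production chain.
    `per_phase` maps phase name -> (fossil_fuel_mj, labour_hours)."""
    fuel = sum(per_phase[p][0] for p in PHASES)
    labour = sum(per_phase[p][1] for p in PHASES)
    return fuel, labour

def compare(nuclear, fossil):
    """Phase-by-phase and total comparison of the two chains."""
    for p in PHASES:
        print(f"{p:30s} fuel {nuclear[p][0]:8.3f} vs {fossil[p][0]:8.3f} MJ/kWh | "
              f"labour {nuclear[p][1]:8.5f} vs {fossil[p][1]:8.5f} h/kWh")
    print("TOTAL", total_requirements(nuclear), "vs", total_requirements(fossil))

# The per-phase values themselves must come from the kind of accounting the study
# performs; none are reproduced here.
```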

Relevance: 100.00%

Abstract:

Computer simulations provide a practical way to address scientific questions that would otherwise be intractable. In evolutionary biology, and in population genetics in particular, the investigation of evolutionary processes frequently involves the implementation of complex models, making simulations a particularly valuable tool in the area. In this thesis I explored three questions involving the geographical range expansion of populations, taking advantage of spatially explicit simulations coupled with approximate Bayesian computation (ABC). First, the neutral evolutionary history of the human spread around the world was investigated, leading to a surprisingly simple model: a straightforward diffusion process of migrations from East Africa across a world map with homogeneous landmasses replicated to a very large extent the complex patterns observed in real human populations, suggesting a more continuous (as opposed to structured) view of the distribution of modern human genetic diversity, which may serve better as a base model for further studies. Second, the postglacial evolution of the European barn owl (Tyto alba), with the formation of a remarkable color cline, was examined with two rounds of simulations: (i) to determine the demographic background history and (ii) to test the probability that a phenotypic cline like the one observed in the natural populations could appear without natural selection. We verified that the modern barn owl population originated from a single Iberian refugium and that its color cline could not have formed by neutral evolution alone but required the participation of selection. The third and last part of this thesis is a simulation-only study inspired by the barn owl case above. In this chapter, we showed that selection is indeed effective during range expansions and that it leaves a distinctive signature, which can then be used to detect and measure natural selection in range-expanding populations.
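The coupling of forward simulations with approximate Bayesian computation mentioned above can be illustrated with the basic rejection-ABC step. This is a minimal sketch, not the spatially explicit pipeline used in the thesis; `prior_sampler` and `simulate` stand for a hypothetical prior and forward model returning summary statistics comparable to the observed ones.

```python
import numpy as np

def abc_rejection(observed_stats, prior_sampler, simulate, n_draws=10000, quantile=0.01):
    """Rejection ABC: keep the parameter draws whose simulated summary
    statistics fall closest to the observed ones (Euclidean distance)."""
    draws, distances = [], []
    for _ in range(n_draws):
        params = prior_sampler()              # sample candidate parameters from the prior
        stats = simulate(params)              # run the forward (e.g. range-expansion) model
        distances.append(np.linalg.norm(np.asarray(stats) - np.asarray(observed_stats)))
        draws.append(params)
    cutoff = np.quantile(distances, quantile) # tolerance set from the empirical distances
    return [p for p, d in zip(draws, distances) if d <= cutoff]

# Hypothetical usage: infer a migration rate m from two toy summary statistics.
rng = np.random.default_rng(0)
prior = lambda: {"m": rng.uniform(0.0, 1.0)}
toy_model = lambda p: (p["m"] * 2.0 + rng.normal(0, 0.05), p["m"] ** 2 + rng.normal(0, 0.05))
posterior_sample = abc_rejection((1.0, 0.25), prior, toy_model, n_draws=2000)
```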

Relevance: 100.00%

Abstract:

The intestinal microbiota, a barrier to the establishment of pathogenic bacteria, is also an important reservoir of opportunistic pathogens. It plays a key role in the dissemination of resistance genes, which are commonly carried by specialized genetic elements such as plasmids, phages, and conjugative transposons. From enterobacterial strains isolated from the faeces of newborns in a university hospital nursery, we obtained indications of phenotypic amplification of gentamicin resistance (at frequencies of 10^-3 to 10^-5, compatible with transposition frequencies). Southern blotting assays showed strong hybridization signals for both plasmid and chromosomal regions in DNA extracted from variants selected at high gentamicin concentrations, using as a probe a labeled cloned insert containing an aminoglycoside-modifying enzyme (AME) gene sequence originating from a plasmid of a Klebsiella pneumoniae strain previously isolated in the same hospital. Furthermore, we found indications of inactivation of other resistance genes in variants selected under similar conditions, as well as indications of co-amplification of other AME markers (amikacin). Since the intestinal environment is a scenario of selective processes due to the therapeutic and prophylactic use of antimicrobial agents, the amplification of low-level antimicrobial resistance (not usually detected or sought by the common methods used for antibiotic resistance surveillance) might compromise the effectiveness of antibiotic chemotherapy.

Relevance: 100.00%

Abstract:

Research project carried out during a stay at the London School of Economics and Political Science, United Kingdom, between 2007 and 2009. The main aim of the project was to analyse the legal-political and institutional implications of a liberal theory of justice and equality applied to multicultural societies with a marked predominance of cultural diversity. The analysis develops an interdisciplinary line of research - between law and political theory - begun in a doctoral thesis on multiculturalism and the rights of cultural minorities (UPF, 2000), which culminated in the publication of Group Rights as Human Rights (Springer, 2006). The research takes the conclusions of that work as its starting point, in particular the relevance of recognizing collective rights; however, the type of questions raised, the focus and the methodology employed are substantially different. Specifically, it addresses particular questions about the model and aspirations of democratic constitutionalism and the role of law in multicultural contexts. Central weight is also given to the institutional dimension of the diversity-management models analysed, prioritizing a comparative approach based on the study of concrete controversies. The aim is to overcome some important limitations of the current literature, such as the tendency to examine in the abstract the compatibility of certain demands with democratic constitutionalism, without addressing how strategies for managing cultural diversity operate in concrete contexts. The publications produced by this project articulate the basic lines of a pluralist model, based on principles rather than rules, which challenges the currently dominant approaches. This model is characterized by a commitment to comparative legitimacy and equality, rejecting paternalism and the typical liberal views of the role of regulation. The presumption of the moral standing of identity groups is fundamental in order to treat them as valid interlocutors with genuine interests. It is also argued that social integration in multicultural contexts does not depend so much on eliminating conflict as, above all, on efficient management that avoids systematic abuses of power. The model defends the role of law in institutionalizing intercultural dialogue, but admits that dialogue does not necessarily lead to agreement or to a coherent and uniform regulatory structure. The aspirations of a pluralist legal order are more modest: to favour negotiation and resolution in each conflict, despite the persistence of fragmentation and the provisional nature of agreements. The lack of a common regulatory framework becomes a virtue insofar as it allows the interaction of different sub-orders, an interaction governed by a multiplicity of rules that are not necessarily harmonious. The advantages and problems of this model are analysed through the study of the fragmentary structure of the international legal order and of the European human rights regime.

Relevance: 100.00%

Abstract:

Agates from the Bighorn district in Montana (USA), the so-called Dryhead area, and their adjacent host rocks were examined in the present study. Analyses by XRD, polarizing microscopy, LA-ICP-MS, cathodoluminescence (CL) and SEM, together with oxygen-isotope measurements, were performed to obtain information about the genesis of this agate type. Investigations of the agate microstructure by polarizing microscopy and CL showed that chalcedony layers and macrocrystalline quartz crystals may have formed by crystallization from the same silica source by a process of self-organization. High defect densities and internal structures (e.g. sector zoning) of quartz indicate that crystallization proceeded rapidly under non-equilibrium conditions. Most trace-element contents in macrocrystalline quartz are lower than in chalcedony due to a process of 'self-purification', which also caused the formation of Fe oxide inclusions and spherules. Although the agates formed in sedimentary host rocks, the analytical data indicate the participation of hydrothermal fluids during agate formation. Trace elements (REE distribution patterns, U contents up to 70 ppm) and CL features of the agate (transient blue CL), as well as associated minerals (fluorite, REE carbonates), point to the influence of hydrothermal processes on the genesis of the Dryhead agates. However, formation temperatures below 120 °C were calculated from O-isotope compositions between 28.9‰ (quartz) and 32.2‰ (chalcedony).
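The formation temperatures quoted at the end come from oxygen-isotope thermometry, in which a mineral-water fractionation calibration of the general form 1000 ln(alpha) = A*10^6/T^2 + B is inverted for temperature. A minimal sketch follows; the calibration coefficients and the assumed water composition in the example are placeholders, not values from this study, and should be replaced by the published calibration actually used.

```python
import math

def temperature_celsius(delta18O_mineral, delta18O_water, A, B):
    """Invert a fractionation calibration of the form
    1000 * ln(alpha) = A * 1e6 / T**2 + B   (T in kelvin)
    for temperature, given mineral and water delta-18O values in per mil."""
    # 1000 ln(alpha) is well approximated by the difference of the delta values.
    thousand_ln_alpha = delta18O_mineral - delta18O_water
    T_kelvin = math.sqrt(A * 1e6 / (thousand_ln_alpha - B))
    return T_kelvin - 273.15

# Hypothetical example: quartz at +28.9 per mil (value from the abstract) with
# placeholder coefficients and an assumed water delta-18O of 0 per mil.
print(temperature_celsius(28.9, delta18O_water=0.0, A=3.38, B=-3.40))
```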

Relevance: 100.00%

Abstract:

PURPOSE: In Switzerland, nationwide large-scale radon surveys have been conducted since the early 1980s to establish the distribution of indoor radon concentrations (IRC). The aim of this work was to study the factors influencing IRC in Switzerland using univariate analyses that take into account biases caused by spatial irregularities of sampling. METHODS: About 212,000 IRC measurements carried out in more than 136,000 dwellings were available for this study. A probability map assessing the risk of exceeding an IRC of 300 Bq/m³ was produced using basic geostatistical techniques. Univariate analyses of IRC for different variables, namely the type of radon detector, various building characteristics such as foundation type, year of construction and building type, as well as altitude, average outdoor temperature during measurement and lithology, were performed by comparing 95% confidence intervals among the classes of each variable. Furthermore, a map showing the spatial aggregation of the number of measurements was generated for each class of each variable in order to assess biases due to spatially irregular sampling. RESULTS: IRC measurements carried out with electret detectors were 35% higher than measurements performed with track detectors. Regarding building characteristics, the IRC of apartments are significantly lower than those of individual houses. Furthermore, buildings with concrete foundations have the lowest IRC. A significant decrease in IRC was found in buildings constructed after 1900 and again after 1970. Moreover, IRC decreases at higher outdoor temperatures, and there is a tendency towards higher IRC at higher altitudes. Regarding lithology, carbonate rock in the Jura Mountains produces significantly higher IRC, almost by a factor of 2, than carbonate rock in the Alps. Sedimentary rock and sediment produce the lowest IRC, while carbonate rock from the Jura Mountains and igneous rock produce the highest IRC. Potential biases due to spatially unbalanced sampling of measurements were identified for several influencing factors. CONCLUSIONS: Significant associations were found between IRC and all variables under study. However, we showed that the spatial distribution of samples strongly affects the relevance of those associations. Therefore, future methods for estimating local radon hazards should take the multidimensional nature of the processes influencing IRC into account.
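The univariate comparisons reported above rest on comparing 95% confidence intervals among the classes of each variable. A minimal sketch of that step for one categorical variable is given below; the column names are hypothetical, and working on the log scale (geometric means) is an assumption made here because IRC distributions are strongly right-skewed, not necessarily the exact procedure of the study.

```python
import numpy as np
import pandas as pd

def class_confidence_intervals(df: pd.DataFrame, value_col: str, class_col: str, z: float = 1.96):
    """95% confidence interval of the mean log-IRC for each class of one variable,
    back-transformed to a geometric mean and interval in Bq/m^3."""
    rows = []
    for cls, grp in df.groupby(class_col):
        x = np.log(grp[value_col].to_numpy())
        mean, sem = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
        rows.append({class_col: cls,
                     "geo_mean": np.exp(mean),
                     "ci_low": np.exp(mean - z * sem),
                     "ci_high": np.exp(mean + z * sem),
                     "n": len(x)})
    return pd.DataFrame(rows)

# Hypothetical usage with assumed column names:
# ci = class_confidence_intervals(measurements, value_col="irc_bq_m3", class_col="foundation_type")
# Classes whose intervals do not overlap are reported as significantly different.
```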

Relevance: 100.00%

Abstract:

Forensic science - both as a source of and as a remedy for errors potentially leading to judicial error - has been studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories/units in Switzerland and abroad. For the literature review, some of the areas studied are: the quality of forensic science work in general, the complex interaction between science and law, and specific propositions as to error sources not directly related to the interaction between law and science. A list of potential error sources all the way from the crime scene to the writing of the report has been established as well. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, due to its heavy reliance on visual observation and to recent cases of misidentification. The results of the tests performed with forensic science students tend to show that the decision-making stages are the most vulnerable to stimuli inducing observational biases. For the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, for example on potential and existing error sources in their work, on the limitations of what can be done with forensic science, and on the possibilities and tools for minimising errors. Training and education to raise the quality of forensic science were discussed, together with possible solutions to minimise the risk of errors in forensic science. In addition, the length of time for which samples of physical evidence are kept was determined as well. The results tend to show considerable agreement on most subjects among the international participants. Their opinions on possible explanations for the occurrence of such problems and on the relative weight of such errors in the three stages of crime scene, laboratory, and report writing, however, disagree with opinions widely represented in the existing literature. Through the present research it was therefore possible to obtain a better view of the interaction between forensic science and judicial error and to propose practical recommendations to minimise its occurrence.

Relevance: 100.00%

Abstract:

Today's approach to anti-doping is mostly centered on the judicial process, despite pursuing the further goals of detecting, reducing, solving and/or preventing doping. Just as decision-making in law enforcement is fed by forensic intelligence, anti-doping might benefit significantly from a more extensive gathering of knowledge. Forensic intelligence might bring a broader logical dimension to the interpretation of data on doping activities, allowing a more future-oriented and comprehensive approach instead of the traditional case-based and reactive process. Information coming from a variety of sources related to doping, whether directly or potentially, would feed an organized memory providing real-time intelligence on the size, seriousness and evolution of the phenomenon. Due to the complexity of doping, integrating analytical chemical results and the longitudinal monitoring of biomarkers with physiological, epidemiological, sociological or circumstantial information might provide a logical framework enabling fit-for-purpose decision-making. Anti-doping intelligence might therefore prove effective at providing a more proactive response to any potential or emerging doping phenomenon, or at addressing existing problems with innovative actions and/or policies. This approach might prove useful for detecting, neutralizing, disrupting and/or preventing organized doping or the trafficking of doping agents, as well as helping to refine the targeting of athletes or teams. In addition, such an intelligence-led methodology would serve to address doping offenses in the absence of adverse analytical chemical evidence.

Relevance: 100.00%

Abstract:

Colour image segmentation based on the hue component presents some problems due to the physical process of image formation. One of these problems is colour clipping, which appears when at least one of the sensor components is saturated. We have designed a system, which works for a trained set of colours, to recover the chromatic information of those pixels whose colour has been clipped. The chromatic correction method is based on the fact that hue and saturation are invariant to a uniform scaling of the three RGB components. The proposed method has been validated by means of a specific colour image processing board that has allowed its execution in real time. We show experimental results of the application of our method.
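The stated invariance of hue and saturation under a uniform scaling of the three RGB components can be checked directly. The sketch below verifies that property with Python's standard colorsys module; the trained-colour lookup that the actual system uses to recover clipped pixels is not reproduced here.

```python
import colorsys

def scale_rgb(rgb, k):
    """Uniformly scale the three RGB components (values in [0, 1])."""
    return tuple(c * k for c in rgb)

def hue_saturation(rgb):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return h, s

# Invariance check: hue and saturation are unchanged by a uniform scaling,
# which is why an unclipped, dimmer observation of the same colour keeps its chroma.
pixel = (0.8, 0.4, 0.2)
assert all(abs(a - b) < 1e-9
           for a, b in zip(hue_saturation(pixel), hue_saturation(scale_rgb(pixel, 0.5))))

# Recovering the chroma of a pixel that was actually clipped requires extra knowledge
# (the trained set of colours in the paper's system), since the clipped components
# have lost information; that lookup is not reproduced in this sketch.
```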

Relevance: 100.00%

Abstract:

X-ray is a technology used for numerous applications in the medical field. The process of X-ray projection gives a 2-dimensional (2D) grey-level texture from a 3-dimensional (3D) object. Until now, no clear demonstration or correlation has positioned 2D texture analysis as a valid indirect evaluation of the 3D microarchitecture. TBS is a new texture parameter based on the measure of the experimental variogram; TBS evaluates the variation between the grey levels of the 2D image. The aim of this study was to evaluate the correlations between 3D bone microarchitecture parameters - evaluated from μCT reconstructions - and the TBS value calculated on 2D projected images. 30 dried human cadaveric vertebrae were acquired on a micro-scanner (eXplorer Locus, GE) at an isotropic resolution of 93 μm. 3D vertebral body models were used. The following 3D microarchitecture parameters were used: bone volume fraction (BV/TV), trabecular thickness (TbTh), trabecular spacing (TbSp), trabecular number (TbN) and connectivity density (ConnD). 3D-to-2D projections were computed taking into account the Beer-Lambert law at X-ray energies of 50, 100 and 150 keV. TBS was assessed on the 2D projected images. Correlations between TBS and the 3D microarchitecture parameters were evaluated using linear regression analysis. A paired t-test was used to assess the effect of X-ray energy on TBS. Multiple linear regressions (backward) were used to evaluate relationships between TBS and the 3D microarchitecture parameters using a bootstrap process. BV/TV of the sample ranged from 18.5 to 37.6%, with an average value of 28.8%. Correlation analysis showed that TBS was strongly correlated with ConnD (0.856 ≤ r ≤ 0.862; p < 0.001), with TbN (0.805 ≤ r ≤ 0.810; p < 0.001) and negatively with TbSp (−0.714 ≤ r ≤ −0.726; p < 0.001), regardless of X-ray energy. The results show that lower TBS values are related to "degraded" microarchitecture, with low ConnD, low TbN and high TbSp; the opposite is also true. X-ray energy had no effect on TBS nor on the correlations between TBS and the 3D microarchitecture parameters. In this study, we demonstrated that TBS was significantly correlated with the 3D microarchitecture parameters ConnD and TbN, and negatively with TbSp, regardless of the X-ray energy used. This article is part of a Special Issue entitled ECTS 2011. Disclosure of interest: None declared.
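TBS is described above as being derived from the experimental variogram of the projected grey levels. The sketch below computes a plain experimental variogram of a 2D image (mean squared grey-level difference as a function of pixel lag); it illustrates the underlying quantity only and is not the TBS formula itself.

```python
import numpy as np

def experimental_variogram(image: np.ndarray, max_lag: int = 10):
    """Semivariogram of a 2D grey-level image: half the mean squared grey-level
    difference at each pixel lag h, averaged over horizontal and vertical directions."""
    img = image.astype(float)
    gamma = []
    for h in range(1, max_lag + 1):
        dx = img[:, h:] - img[:, :-h]          # horizontal pixel pairs at lag h
        dy = img[h:, :] - img[:-h, :]          # vertical pixel pairs at lag h
        gamma.append(0.5 * np.mean(np.concatenate([dx.ravel() ** 2, dy.ravel() ** 2])))
    return np.arange(1, max_lag + 1), np.array(gamma)

# Texture parameters such as TBS are then derived from the behaviour of this curve
# (e.g. its initial slope on a log-log scale); that derivation is not reproduced here.
lags, gamma = experimental_variogram(np.random.default_rng(0).random((64, 64)))
```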

Relevance: 100.00%

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped under the so-called technical models and the latter under so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; more technically, the event is F_t-measurable. On the other hand, the concept of pairs trading or market-neutral strategy is fairly simple. However, it can be cast into a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without backtesting the strategies mentioned. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is the reason why we emphasize the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of the technical models are more volatile than their counterparts from the market-neutral strategies, and that calibration must be done with high-frequency sampling to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of backtesting with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
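For the mean-reversion (pairs-trading) side, the spread between two co-moving instruments is often modelled as an Ornstein-Uhlenbeck process. A minimal sketch, assuming the spread series has already been constructed: the OU parameters are estimated through the exact AR(1) discretization of the process and a z-score entry signal is derived from them. This illustrates the class of models discussed, not the calibration procedure of the thesis.

```python
import numpy as np

def fit_ou(spread: np.ndarray, dt: float = 1.0):
    """Fit an Ornstein-Uhlenbeck process dX = theta*(mu - X)dt + sigma dW to a spread
    series via the OLS regression X[t+1] = a + b*X[t] + eps (exact AR(1) discretization).
    Assumes the fitted slope satisfies 0 < b < 1 (a mean-reverting spread)."""
    x, y = spread[:-1], spread[1:]
    b, a = np.polyfit(x, y, 1)                            # slope b = exp(-theta*dt), intercept a
    resid = y - (a + b * x)
    theta = -np.log(b) / dt                               # mean-reversion speed
    mu = a / (1.0 - b)                                    # long-run mean of the spread
    eq_std = np.sqrt(resid.var(ddof=2) / (1.0 - b ** 2))  # stationary standard deviation
    return mu, theta, eq_std

def zscore_signal(spread: np.ndarray, entry: float = 2.0):
    """-1 = short the spread, +1 = long the spread, 0 = flat, based on the OU z-score."""
    mu, _, eq_std = fit_ou(spread)
    z = (spread - mu) / eq_std
    signal = np.zeros_like(z)
    signal[z > entry] = -1.0
    signal[z < -entry] = +1.0
    return signal

# Hypothetical usage on a simulated mean-reverting spread:
rng = np.random.default_rng(1)
s = np.zeros(1000)
for t in range(999):
    s[t + 1] = s[t] + 0.1 * (0.0 - s[t]) + 0.2 * rng.normal()
print(fit_ou(s))
```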

Relevance: 100.00%

Abstract:

The intravital diagnosis of intracranial arterial dissection is not always possible due to atypical and non-specific clinical and radiological presentations. Postmortem pathological examination of the cerebral blood vessels is therefore necessary to establish or confirm the presence of a dissecting aneurysm of the intracranial arteries. Most of the described cases showed no significant underlying vascular pathology. Here we present the case of a 24-year-old woman who died 5 days after admission to the hospital for a rapidly developing right-sided hemisyndrome. Neuroradiological examination had revealed ill-defined bifrontal hypodense lesions, and angiographic findings were compatible with a dissection of the left extracranial internal carotid artery with embolic subocclusion of both anterior cerebral arteries. The pathological evaluation ruled out a thromboembolic occlusion of the cerebral arteries and an extracranial internal carotid artery dissection, but showed an extended dissecting process of variable age in the anterior circulation of the circle of Willis. The dissected vessels showed pathological changes characteristic of segmental mediolytic "arteritis" [Slavin and Gonzalez-Vitale 1976]. To our knowledge this is the first report of intracranial arteries being affected by this pathological entity. Our case illustrates the importance of postmortem examination of dissecting aneurysms of intracranial arteries. Careful serial-section studies of dissected intracranial arteries in young subjects should be performed and may allow for a better understanding of the vascular pathology underlying the dissection process.

Relevance: 100.00%

Abstract:

Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization. To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems. Three studies - one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources - explain the application of decision theory and influence diagrams to each of these decisions. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
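The principle of expected utility maximization referred to above chooses, among the available actions, the one whose consequences have the highest probability-weighted utility, with the probabilities coming from the Bayesian inference step. A minimal sketch with hypothetical actions, states, probabilities and utilities (none taken from the thesis's case studies):

```python
def expected_utility(action, probabilities, utility):
    """E[U | action] = sum over states of P(state) * U(action, state)."""
    return sum(p * utility[(action, state)] for state, p in probabilities.items())

def best_action(actions, probabilities, utility):
    """Return the action maximizing expected utility, together with its value."""
    return max(((a, expected_utility(a, probabilities, utility)) for a in actions),
               key=lambda pair: pair[1])

# Hypothetical example: whether to submit a mark to a comparison process.
probabilities = {"same_source": 0.7, "different_source": 0.3}   # posterior from the inference step
utility = {
    ("submit", "same_source"): 1.0,        ("submit", "different_source"): -0.4,
    ("do_not_submit", "same_source"): -0.6, ("do_not_submit", "different_source"): 0.0,
}
print(best_action(["submit", "do_not_submit"], probabilities, utility))
```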

Relevance: 100.00%

Abstract:

Peroxisome proliferator-activated receptors (PPARs) are essential in glucose and lipid metabolism and are implicated in metabolic disorders predisposing to atherosclerosis, such as diabetes and dyslipidemia. Conversely, antidiabetic glitazones and hypolipidemic fibrate drugs, known as PPARgamma and PPARalpha ligands, respectively, reduce the process of atherosclerotic lesion formation, which involves chronic immunoinflammatory processes. Major histocompatibility complex class II (MHC-II) molecules, expressed on the surface of specialized cells, are directly involved in the activation of T lymphocytes and in the control of the immune response. Interestingly, expression of MHC-II has recently been observed in atherosclerotic plaques, and it can be induced by the proinflammatory cytokine interferon-gamma (IFN-gamma) in vascular cells. To explore a possible role for PPAR ligands in the regulation of the immune response, we investigated whether PPAR activation affects MHC-II expression in atheroma-associated cells. In the present study, we demonstrate that PPARgamma but not PPARalpha ligands act as inhibitors of IFN-gamma-induced MHC-II expression and thus as repressors of MHC-II-mediated T-cell activation. All of the different types of PPARgamma ligands tested inhibited MHC-II expression. This effect of PPARgamma ligands is due to specific inhibition of promoter IV of CIITA and does not affect constitutive expression of MHC-II. Thus, the beneficial effects of antidiabetic PPARgamma activators on atherosclerotic plaque development may be partly explained by their repression of MHC-II expression and subsequent inhibition of T-lymphocyte activation.