890 results for Tchebyshev metrics


Relevance:

10.00%

Publisher:

Abstract:

Schizophrenia is often considered a dysconnection syndrome in which abnormal interactions between large-scale functional brain networks result in cognitive and perceptual deficits. In this article we apply graph-theoretic measures to brain functional networks derived from the resting EEGs of fourteen schizophrenia patients and fourteen matched control subjects. The networks were extracted from common-average-referenced EEG time series through partial and unpartial cross-correlation methods. Unpartial correlation detects functional connectivity based on direct and/or indirect links, while partial correlation allows one to ignore indirect links. We quantified the network properties with graph metrics including small-worldness, vulnerability, modularity, assortativity, and synchronizability. The schizophrenia patients showed method-specific and frequency-specific changes, especially pronounced for the modularity, assortativity, and synchronizability measures. However, the differences between schizophrenia patients and normal controls in terms of graph-theory metrics were stronger for the unpartial correlation method.
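As a rough illustration of the pipeline this abstract describes, the sketch below builds unpartial and partial correlation networks from a multichannel signal and computes two of the cited graph metrics with networkx. The correlation threshold, the surrogate data, and the use of the precision matrix for partial correlation are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def correlation_networks(eeg, threshold=0.5):
    """Build unpartial and partial correlation graphs from an
    (n_channels, n_samples) array; the threshold is an assumption."""
    r = np.corrcoef(eeg)                    # unpartial correlation: direct + indirect links
    prec = np.linalg.pinv(np.cov(eeg))      # precision matrix
    d = np.sqrt(np.diag(prec))
    pr = -prec / np.outer(d, d)             # partial correlation: indirect links suppressed
    np.fill_diagonal(pr, 1.0)
    graphs = {}
    for name, mat in (("unpartial", r), ("partial", pr)):
        adj = (np.abs(mat) > threshold).astype(int)
        np.fill_diagonal(adj, 0)
        graphs[name] = nx.from_numpy_array(adj)
    return graphs

# Surrogate 14-channel recording: one shared source with channel-specific gain.
rng = np.random.default_rng(0)
gain = rng.uniform(0.5, 2.0, size=(14, 1))
eeg = rng.standard_normal((14, 2048)) + gain * rng.standard_normal((1, 2048))
for name, g in correlation_networks(eeg).items():
    if g.number_of_edges():
        parts = greedy_modularity_communities(g)
        print(name, "modularity=%.3f" % modularity(g, parts),
              "assortativity=%.3f" % nx.degree_assortativity_coefficient(g))
```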

Relevance:

10.00%

Publisher:

Abstract:

The Iowa Department of Transportation (DOT) is responsible for approximately 4,100 bridges and structures that are part of the state's primary highway system, which includes the Interstate, US, and Iowa highway routes. A pilot study was conducted for six bridges in two Iowa river basins (the Cedar River Basin and the South Skunk River Basin) to develop a methodology to evaluate their vulnerability to climate change and extreme weather. The six bridges had been either closed or severely stressed by record streamflow within the past seven years. An innovative methodology was developed to generate streamflow scenarios given climate change projections. The methodology selected appropriate rainfall projection data to feed into a streamflow model that generated continuous annual peak streamflow series for 1960 through 2100, which were used as input to PeakFQ to estimate return intervals for floods. The methodology evaluated the plausibility of the rainfall projections and the credibility of the streamflow simulation while remaining consistent with U.S. Geological Survey (USGS) protocol for estimating flood return intervals. The results were conveyed in an innovative graph that combined historical and scenario-based design metrics for use in bridge vulnerability analysis and engineering design. The pilot results determined that the annual peak streamflow response to climate change likely will be basin-size dependent; that four of the six pilot study bridges would be exposed to an increased frequency of extreme streamflow and would be overtopped more often; that the proposed design for replacing the Interstate 35 bridges over the South Skunk River south of Ames, Iowa, is resilient to climate change; and that some Iowa DOT bridge design policies could be reviewed to consider incorporating climate change information.
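PeakFQ implements the USGS Bulletin 17 log-Pearson Type III frequency analysis, which is the step that turns an annual peak series into flood return intervals. A bare-bones sketch of that fit follows; it omits PeakFQ's regional-skew weighting and low-outlier screening, and the peak series is invented.

```python
import numpy as np
from scipy import stats

def lp3_flood_quantile(annual_peaks, aep=0.01):
    """Rough log-Pearson Type III quantile for a given annual exceedance
    probability (AEP); omits the regional-skew weighting and outlier
    screening that PeakFQ applies."""
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    # Frequency factor K: Pearson III deviate at the 1 - AEP quantile.
    k = stats.pearson3.ppf(1.0 - aep, skew)
    return 10 ** (mean + k * std)

# Hypothetical annual peak discharges in cfs.
peaks = [812, 1450, 530, 2210, 990, 1740, 660, 3100, 1210, 880,
         1530, 705, 2600, 1100, 940]
print("1%% AEP (100-year) flood: %.0f cfs" % lp3_flood_quantile(peaks))
```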

Relevance:

10.00%

Publisher:

Abstract:

Image registration has been proposed as an automatic method for recovering cardiac displacement fields from Tagged Magnetic Resonance Imaging (tMRI) sequences. Initially performed as a set of pairwise registrations, these techniques have evolved to the use of 3D+t deformation models, requiring metrics of joint image alignment (JA). However, only linear combinations of cost functions defined with respect to the first frame have been used. In this paper, we apply k-Nearest Neighbor Graph (kNNG) estimators of the α-entropy (Hα) to measure the joint similarity between frames and to combine the information provided by different cardiac views into a unified metric. Experiments performed on six subjects showed a significantly higher accuracy (p < 0.05) with respect to a standard pairwise alignment (PA) approach in terms of mean positional error and variance with respect to manually placed landmarks. The developed method was used to study strains in patients with myocardial infarction, showing consistency between strain, infarction location, and coronary occlusion. This paper also presents an interesting clinical application of graph-based metric estimators, showing their value for solving practical problems found in medical imaging.
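kNNG estimators of the Rényi α-entropy are commonly built in the Hero-and-Michel style: for n samples in d dimensions, the total k-nearest-neighbor edge length with weights ||x_i - x_j||^γ, γ = d(1 - α), normalized by n^α and a distribution-free constant β, converges to Hα up to that constant. A minimal sketch of that construction, with β set to 1 and all parameter values chosen arbitrarily (assumptions, not the paper's settings):

```python
import numpy as np
from scipy.spatial import cKDTree

def knng_renyi_entropy(x, alpha=0.5, k=4, beta=1.0):
    """k-NN graph estimator of the Renyi alpha-entropy; beta is a
    distribution-free bias constant, set to 1 here for illustration."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    gamma = d * (1.0 - alpha)
    # Distances to the k nearest neighbors (first hit is the point itself).
    dist, _ = cKDTree(x).query(x, k=k + 1)
    length = np.sum(dist[:, 1:] ** gamma)       # total weighted edge length
    return np.log(length / (beta * n ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
print(knng_renyi_entropy(rng.standard_normal((2000, 2))))  # ~H_0.5 of a 2-D Gaussian
```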

Relevance:

10.00%

Publisher:

Abstract:

The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-detector-row MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard, and bone), and the reconstruction algorithm (filtered back-projection (FBP) or adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and the 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch, and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from the 3D measurements. The noise properties of volumes measured on last-generation MDCT scanners were thus studied using a local 3D NPS metric; however, the impact of non-stationary noise effects may need further investigation.
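A common recipe for a local 2D NPS from repeated noise-only (e.g., water-phantom) ROIs is to average the squared DFT of mean-subtracted ROIs and scale by the pixel area over the ROI size. The sketch below follows that textbook recipe; the ROI size, pixel spacing, and white-noise surrogate data are assumptions, not the paper's measurement protocol.

```python
import numpy as np

def nps_2d(rois, pixel_spacing=0.5):
    """2D noise power spectrum from a stack of noise-only ROIs
    (shape: n_rois x ny x nx); pixel_spacing in mm is an assumption."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)   # detrend each ROI
    dft2 = np.abs(np.fft.fft2(rois)) ** 2                 # |DFT|^2 per ROI
    # Ensemble average, scaled by pixel area over ROI size.
    return dft2.mean(axis=0) * pixel_spacing ** 2 / (nx * ny)

rng = np.random.default_rng(2)
rois = rng.normal(0.0, 10.0, size=(64, 64, 64))   # surrogate white-noise ROIs
nps = nps_2d(rois)
# Sanity check: integrating the NPS recovers the noise variance (~100 here).
print("variance recovered from NPS integral:", nps.sum() / (64 * 64 * 0.25))
```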

Relevance:

10.00%

Publisher:

Abstract:

Traditionally, the Iowa Department of Transportation has used the Iowa Runoff Chart and single-variable regional-regression equations (RREs) from a U.S. Geological Survey report (published in 1987) as the primary methods to estimate annual exceedance-probability discharge (AEPD) for small (20 square miles or less) drainage basins in Iowa. With the publication of new multi- and single-variable RREs by the U.S. Geological Survey (published in 2013), the Iowa Department of Transportation needs to determine which methods of AEPD estimation provide the best accuracy and the least bias for small drainage basins in Iowa. Twenty-five streamgages with drainage areas less than 2 square miles (mi2) and 55 streamgages with drainage areas between 2 and 20 mi2 were selected for the comparisons, which used two evaluation metrics. Estimates of AEPDs calculated for the streamgages using the expected moments algorithm/multiple Grubbs-Beck test analysis method were compared to estimates of AEPDs calculated from the 2013 multivariable RREs; the 2013 single-variable RREs; the 1987 single-variable RREs; the TR-55 rainfall-runoff model; and the Iowa Runoff Chart. For the 25 streamgages with drainage areas less than 2 mi2, results of the comparisons seem to indicate that the best overall accuracy and the least bias may be achieved by using the TR-55 method for flood regions 1 and 3 (published in 2013) and by using the 1987 single-variable RREs for flood region 2 (published in 2013). For drainage basins with areas between 2 and 20 mi2, results of the comparisons seem to indicate that the best overall accuracy and the least bias may be achieved by using the 1987 single-variable RREs for the Southern Iowa Drift Plain landform region and for flood region 3 (published in 2013), by using the 2013 multivariable RREs for the Iowan Surface landform region, and by using the 2013 or 1987 single-variable RREs for flood region 2 (published in 2013). For all other landform or flood regions in Iowa, use of the 2013 single-variable RREs may provide the best overall accuracy and the least bias. An examination was conducted to understand why the 1987 single-variable RREs seem to provide better accuracy and less bias than either the 2013 multi- or single-variable RREs. A comparison of 1-percent annual exceedance-probability regression lines for hydrologic regions 1–4 from the 1987 single-variable RREs and for flood regions 1–3 from the 2013 single-variable RREs indicates that the 1987 single-variable regional-regression lines generally have steeper slopes and lower discharges when compared to the 2013 single-variable regional-regression lines for corresponding areas of Iowa. The combination of the definition of hydrologic regions, the lower discharges, and the steeper slopes of the regression lines associated with the 1987 single-variable RREs seems to provide better accuracy and less bias when compared to the 2013 multi- or single-variable RREs; better accuracy and less bias were found particularly for drainage areas less than 2 mi2, and also for some drainage areas between 2 and 20 mi2. The 2013 multi- and single-variable RREs are considered to provide better accuracy and less bias for larger drainage areas. Results of this study indicate that additional research is needed to address the curvilinear relation between drainage area and AEPDs for areas of Iowa.
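A single-variable RRE of the kind compared here typically takes the power-law form Q_P = a * A^b, with Q_P the P-percent AEP discharge and A the drainage area, fit in log-log space. The sketch below shows that fit by ordinary least squares; the coefficients and streamgage pairs are hypothetical, not values from the 1987 or 2013 reports, which also use generalized least squares and many more stations.

```python
import numpy as np

def fit_single_variable_rre(drainage_area_mi2, peak_q_cfs):
    """Fit Q = a * A^b by least squares in log10 space."""
    loga = np.log10(drainage_area_mi2)
    logq = np.log10(peak_q_cfs)
    b, log_a0 = np.polyfit(loga, logq, 1)   # slope, intercept
    return 10 ** log_a0, b

# Hypothetical streamgage pairs: drainage area (mi^2), 1% AEP discharge (cfs).
area = np.array([0.6, 1.1, 2.3, 4.8, 7.5, 12.0, 18.5])
q100 = np.array([310, 520, 900, 1550, 2100, 2900, 3900])
a, b = fit_single_variable_rre(area, q100)
print("Q_1%% = %.0f * A^%.2f -> %.0f cfs at A = 10 mi^2" % (a, b, a * 10 ** b))
```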

Relevance:

10.00%

Publisher:

Abstract:

This project resulted in the development of a framework for making asset management decisions on low-volume bridges. The research focused on low-volume bridges located in the agricultural counties of Iowa because recent research has shown that these counties have the greatest percentage of structurally deficient bridges in the nation. Many of the same counties also have the highest crop yields in the state, creating a situation where detours caused by deficient bridges on farm-to-market roads increase the cost of transporting the crops. Thus, the research proposed the use of social return on investment (SROI), a tool used by international institutions such as the World Bank, as an asset management metric to gauge the socioeconomic impact of structurally deficient bridges on the state, in an effort to provide quantified justification for funding improvements on low-volume assets such as these rural bridges. The study found that combining SROI with current asset management metrics like average daily traffic (ADT) made it possible to prioritize the bridges in such a way that the limited resources available are allocated in a manner that promotes a more equitable distribution and directly benefits the user, in this case Iowa farmers. The result is a system that aligns more closely with the spirit of MAP-21, in that infrastructure investments are used to facilitate economic growth for Iowa's agricultural economy.
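The abstract does not spell out a combination formula, but the kind of blended ranking it describes can be illustrated by normalizing SROI and ADT and sorting bridges by a weighted score. The weights and the bridge records below are purely hypothetical, not values from the report.

```python
# Hypothetical bridge records: (name, average daily traffic, SROI ratio).
bridges = [("Bridge A", 45, 4.2), ("Bridge B", 310, 1.8),
           ("Bridge C", 120, 3.5), ("Bridge D", 25, 5.1)]

def priority(bridges, w_sroi=0.6, w_adt=0.4):
    """Rank bridges by a weighted blend of normalized SROI and ADT;
    the 60/40 weighting is an assumption, not the report's."""
    max_adt = max(adt for _, adt, _ in bridges)
    max_sroi = max(sroi for _, _, sroi in bridges)
    score = lambda adt, sroi: w_sroi * sroi / max_sroi + w_adt * adt / max_adt
    return sorted(bridges, key=lambda b: score(b[1], b[2]), reverse=True)

for name, adt, sroi in priority(bridges):
    print(name, "ADT:", adt, "SROI:", sroi)
```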

Relevance:

10.00%

Publisher:

Abstract:

Abstract

Although the genomes of any two human individuals are more than 99.99% identical at the sequence level, some structural variation can be observed. Differences between genomes include single nucleotide polymorphisms (SNPs), inversions, and copy number changes (gain or loss of DNA). The latter can range from submicroscopic events (CNVs, at least 1 kb in size) to complete chromosomal aneuploidies. Small copy number variations often have no (lethal) consequences for the cell, but a few have been associated with disease susceptibility and phenotypic variation. Larger rearrangements (i.e., complete chromosome gain) are frequently associated with more severe consequences for health, such as genomic disorders and cancer. High-throughput technologies like DNA microarrays enable the detection of CNVs in a genome-wide fashion. Since the initial catalogue of CNVs in the human genome in 2006, there has been tremendous interest in CNVs in the context of both population and medical genetics. Understanding CNV patterns within and between human populations is essential to elucidate their possible contribution to disease. But genome analysis is a challenging task; the technology evolves rapidly, creating needs for novel, efficient, and robust analytical tools, which need to be compared with existing ones. Also, while the link between CNVs and disease has been established, the relative CNV contribution is not fully understood, and the predisposition to disease from CNVs of the general population has not yet been investigated.

During my PhD thesis, I worked on several aspects related to CNVs. As I will report in chapter 3, I was interested in computational methods to detect CNVs in the general population. I had access to the CoLaus dataset, a population-based study with more than 6,000 participants from the Lausanne area. All these individuals were analysed on SNP arrays, and extensive clinical information was available. My work explored existing CNV detection methods, and I developed a variety of metrics to compare their performance. Since these methods did not produce entirely satisfactory results, I implemented my own method, which outperformed two existing methods. I also devised strategies to combine CNVs from different individuals into CNV regions.

I was also interested in the clinical impact of CNVs in common disease (chapter 4). Through an international collaboration led by the Centre Hospitalier Universitaire Vaudois (CHUV) and Imperial College London, I was involved as a main data analyst in the investigation of a rare deletion at chromosome 16p11 detected in obese patients. Specifically, we compared 8,456 obese patients and 11,856 individuals from the general population, and we found that the deletion accounted for 0.7% of the morbid obesity cases and was absent in healthy non-obese controls. This highlights the importance of rare variants with strong impact and provides new insights into the design of clinical studies to identify the missing heritability in common disease.

Furthermore, I was interested in the detection of somatic copy number alterations (SCNAs) and their consequences in cancer (chapter 5). This project was a collaboration initiated by the Ludwig Institute for Cancer Research and involved other groups from the Swiss Institute of Bioinformatics, the CHUV, and the Universities of Lausanne and Geneva. The focus of my work was to identify genes with altered expression levels within SCNAs in seven metastatic melanoma cell lines, using CGH and SNP arrays, RNA-seq, and karyotyping. Very few SCNA genes were shared by even two melanoma samples, making it difficult to draw any conclusions at the individual gene level. To overcome this limitation, I used a network-guided analysis to determine whether any pathways, defined by amplified or deleted genes, were common among the samples. Six of the melanoma samples were potentially altered in four pathways, and five samples harboured copy-number and expression changes in components of six pathways. In total, this approach identified 28 pathways. Validation with two external, large melanoma datasets confirmed all but three of the detected pathways and demonstrated the utility of network-guided approaches for the analysis of both large and small datasets.

Résumé

Although the genomes of two individuals are more than 99.99% similar, structural differences can be observed. These differences include single nucleotide polymorphisms, inversions, and copy number changes (gain or loss of DNA). The latter range from small, so-called submicroscopic events (at least 1 kb in size), called CNVs (copy number variants), up to larger events that can affect entire chromosomes. Small variations are generally without consequence for the cell, but some have been implicated in predisposition to certain diseases and in phenotypic variation in the general population. Larger rearrangements (for example, an additional copy of a chromosome, commonly called a trisomy) have more serious repercussions for health, as in certain genomic syndromes and in cancer. High-throughput technologies such as DNA microarrays allow the detection of CNVs at the scale of the human genome. The 2006 mapping of CNVs in the human genome aroused strong interest in population genetics and medical genetics. Detecting differences within and between several populations is a key element for elucidating the possible contribution of CNVs to disease. However, genome analysis remains a difficult task; the technology evolves very rapidly, creating new needs for the development of tools, the improvement of existing ones, and the comparison of the different methods. Moreover, while the link between CNVs and disease has been established, their precise contribution is not yet understood, and studies on disease predisposition from CNVs detected in the general population have not yet been carried out.

During my doctorate, I concentrated on three main axes relating to CNVs. In chapter 3, I detail my work on methods for analysing DNA microarrays. I had access to the data of the CoLaus project, a study of the population of Lausanne. In this study, the genomes of more than 6,000 individuals were analysed with SNP arrays, and abundant clinical information was collected. During this work, I used and compared several CNV detection methods. As the results were not completely satisfactory, I implemented my own method, which performs better than two of the three other methods used. I also looked into strategies for combining the CNVs of different individuals into regions.

I was also interested in the clinical impact of CNVs in common genetic diseases (chapter 4). This project was made possible by a close collaboration with the Centre Hospitalier Universitaire Vaudois (CHUV) and Imperial College London. In this project, I was one of the main analysts, and I worked on the clinical impact of a rare deletion of chromosome 16p11 present in patients suffering from obesity. In this multidisciplinary collaboration, we compared 8,456 obese patients and 11,856 individuals from the general population. We found that the deletion was implicated in 0.7% of morbid obesity cases and was absent in healthy (non-obese) controls. Our study illustrates the importance of rare CNVs, which can have a very strong clinical impact. It also suggests an alternative to association studies for improving our understanding of the aetiology of common genetic diseases.

I also worked on the detection of somatic copy number alterations (SCNAs) and their consequences in cancer (chapter 5). This project was a collaboration initiated by the Ludwig Institute for Cancer Research and involving the Swiss Institute of Bioinformatics, the CHUV, and the Universities of Lausanne and Geneva. I concentrated on the identification of genes affected by SCNAs and over- or under-expressed in cell lines derived from metastatic melanomas. The data used were generated by DNA microarrays (CGH and SNP) and by high-throughput sequencing of the transcriptome. My research showed that few genes recur across melanomas, which makes the interpretation of the results difficult. To get around this limitation, I used a network analysis to determine whether signalling networks enriched in amplified or lost genes were common to the different samples. Among the 28 networks detected, four are potentially deregulated in six melanomas, and six further networks are affected in five melanomas. Validation of these results with two large public datasets confirmed all of these networks except three. This demonstrates the usefulness of this approach for the analysis of both small and large datasets.

Résumé grand public

The advent of molecular biology, particularly over the last ten years, has revolutionized research in medical genetics. Thanks to the availability of the human reference genome from 2001 onwards, new technologies such as DNA microarrays appeared and made it possible to study the genome as a whole at a so-called submicroscopic resolution, until then impossible with the traditional techniques of cytogenetics. One of the most important examples is the study of structural variation of the genome, in particular the study of gene copy number. It was established as early as 1959, with the identification of trisomy 21 by Professor Jérôme Lejeune, that the gain of an extra chromosome was at the origin of a genetic syndrome with serious repercussions for the patient's health. These observations have also been made in oncology on cancer cells, which frequently accumulate copy number aberrations (such as the loss or gain of one or more chromosomes). From 2004, several research groups catalogued copy number changes in individuals from the general population (that is, without visible clinical symptoms). In 2006, Dr. Richard Redon established the first map of copy number variation in the general population. These discoveries demonstrated that variations in the genome are frequent and that most of them are benign, that is, without clinical consequence for the health of the individual. This aroused very great interest in understanding the natural variation between individuals, but also in better grasping genetic predisposition to certain diseases.

During my thesis, I developed new computational tools for the analysis of DNA microarrays with the aim of mapping these variations at the genomic scale. I used these tools to establish the variation in the Swiss population, and I then devoted myself to the study of factors that could explain predisposition to diseases such as obesity. This study, in collaboration with the Centre Hospitalier Universitaire Vaudois, led to the identification of a deletion on chromosome 16 explaining 0.7% of morbid obesity cases. This study has several repercussions. First of all, it makes it possible to perform diagnosis in unborn children in order to determine their predisposition to obesity. Next, this locus involves some twenty genes, which allows new working hypotheses to be formulated and research to be oriented so as to improve our understanding of the disease, with the hope of discovering a new treatment. Finally, our study provides an alternative to genetic association studies, which have so far had only mixed success.

In the last part of my thesis, I was interested in the analysis of copy number aberrations in cancer. I chose to study melanoma, a skin cancer. Melanoma is a very aggressive tumour; it is responsible for 80% of skin cancer deaths and is often resistant to the treatments used in oncology (chemotherapy, radiotherapy). Within the framework of a collaboration between the Ludwig Institute for Cancer Research, the Swiss Institute of Bioinformatics, the CHUV, and the Universities of Lausanne and Geneva, we sequenced the exome (the genes) and the transcriptome (gene expression) of seven metastatic melanomas and carried out copy number analyses using DNA microarrays and karyotyping. My work enabled the development of new analysis methods adapted to cancer, the establishment of the list of cell signalling networks recurrently affected in melanoma, and the identification of two potential therapeutic targets hitherto overlooked in skin cancers.
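One concrete step the thesis mentions, combining CNVs from different individuals into CNV regions, amounts to merging overlapping genomic intervals. A minimal sketch of that idea follows; the coordinates are invented and the simple any-overlap rule is an assumption, since such analyses often use stricter reciprocal-overlap criteria.

```python
def merge_cnvs_into_regions(calls):
    """Merge per-individual CNV calls (chrom, start, end) that overlap
    into CNV regions; a simple any-overlap rule is assumed here."""
    regions = []
    for chrom, start, end in sorted(calls):
        if regions and regions[-1][0] == chrom and start <= regions[-1][2]:
            regions[-1][2] = max(regions[-1][2], end)   # extend current region
        else:
            regions.append([chrom, start, end])         # open a new region
    return [tuple(r) for r in regions]

# Hypothetical calls from three individuals on chromosome 16.
calls = [("chr16", 29_500_000, 30_100_000),
         ("chr16", 29_650_000, 30_200_000),
         ("chr16", 31_000_000, 31_050_000)]
print(merge_cnvs_into_regions(calls))
# -> [('chr16', 29500000, 30200000), ('chr16', 31000000, 31050000)]
```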

Relevance:

10.00%

Publisher:

Abstract:

Final degree project on software productivity metrics for project management.

Relevance:

10.00%

Publisher:

Abstract:

Validation is the main bottleneck preventing the adoption of many medical image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed based on some objective metrics. In this work, a different approach based on Petri Nets (PNs) is proposed. The basic idea consists in predicting the accuracy that will result from a given processing based on the characterization of the sources of inaccuracy of the system. Here we propose a proof of concept in the scenario of a diffusion imaging analysis pipeline. A PN is built after the detection of the possible sources of inaccuracy. By integrating the first qualitative insights based on the PN with quantitative measures, it is possible to optimize the PN itself and to predict the inaccuracy of the system in a different setting. Results show that the proposed model provides good prediction performance and suggests the optimal processing approach.
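The paper's actual net is not reproduced in the abstract, but the PN machinery it builds on can be illustrated with a toy net: places hold tokens, and a transition fires when all of its input places are marked, moving tokens downstream. The three-step pipeline below is a made-up stand-in for a diffusion analysis pipeline, not the paper's model.

```python
class PetriNet:
    """Minimal place/transition net: a transition fires when every one of
    its input places holds at least one token (toy semantics)."""
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = []             # (name, input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions.append((name, inputs, outputs))

    def step(self):
        for name, ins, outs in self.transitions:
            if all(self.marking.get(p, 0) > 0 for p in ins):
                for p in ins:
                    self.marking[p] -= 1
                for p in outs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                return name               # fire one enabled transition
        return None

# Toy pipeline: raw data -> denoised -> model fitted (hypothetical places).
net = PetriNet({"raw": 1, "denoised": 0, "fitted": 0})
net.add_transition("denoise", ["raw"], ["denoised"])
net.add_transition("fit", ["denoised"], ["fitted"])
while (fired := net.step()):
    print("fired:", fired, "->", net.marking)
```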

Relevance:

10.00%

Publisher:

Abstract:

The objective of this project was to evaluate low-cost measures to reduce speeds on high-crash horizontal curves. The researchers evaluated two low-cost treatments in Iowa to determine their effectiveness in reducing speeds on rural two-lane roadways. This report summarizes how the research team selected sites and collected data, and presents the results. The team selected six sites. Retroreflective post treatments were added to existing chevrons at four sites, and on-pavement curve markings were added at two sites. The researchers collected speed data before and after installation of the two treatments. The study compared several speed metrics to assess the effectiveness of the treatments. Overall, both treatments were moderately effective in reducing speeds. Their most significant impact was in reducing the percentage of vehicles traveling over the posted or advisory speed by 5, 10, 15, or 20 or more mph. This result suggests that the treatments are most effective in reducing high-end speeds.
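The high-end metric the report highlights, the percentage of vehicles exceeding the posted or advisory speed by 5, 10, 15, or 20+ mph, is straightforward to compute from spot-speed data. The sketch below shows the calculation; the speed samples and the advisory speed are invented for illustration.

```python
import numpy as np

def exceedance_percentages(speeds_mph, advisory_mph, margins=(5, 10, 15, 20)):
    """Percent of vehicles exceeding the advisory speed by each margin."""
    speeds = np.asarray(speeds_mph, dtype=float)
    return {m: 100.0 * np.mean(speeds >= advisory_mph + m) for m in margins}

rng = np.random.default_rng(3)
before = rng.normal(58, 6, size=500)    # hypothetical spot speeds, mph
after = rng.normal(55, 5, size=500)     # hypothetical post-treatment speeds
for label, sample in (("before", before), ("after", after)):
    pcts = exceedance_percentages(sample, advisory_mph=50)
    print(label, {m: round(p, 1) for m, p in pcts.items()})
```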

Relevance:

10.00%

Publisher:

Abstract:

Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtain an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance therefore depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relationship defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function; the concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, the energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of the local optima basins of attraction and define weighted transitions between basins by accounting for all the possible ways of crossing the basin frontiers via one random move. Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectories of simple search heuristics, mining the frequency and inter-arrival time with which a heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape and that we can characterize using the tools of complex network science. We argue that this network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performance of trajectory-based local search heuristics.
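The escape-edge construction described above can be sketched on a small random bit-string landscape: hill-climb to a local optimum, then estimate edge weights by counting where random kick moves followed by hill climbing land. The random fitness lookup, the kick strength, and the sample sizes are illustrative assumptions, not the NK or QAP instances used in the work.

```python
import itertools, random
from collections import Counter

N = 8
random.seed(4)
# Random landscape: an i.i.d. fitness value for every length-N bit string.
fitness = {bits: random.random() for bits in itertools.product((0, 1), repeat=N)}

def neighbors(s):
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(N)]

def hill_climb(s):
    """Best-improvement local search to the nearest local optimum."""
    while True:
        best = max(neighbors(s), key=fitness.get)
        if fitness[best] <= fitness[s]:
            return s
        s = best

def kick(s, strength=2):
    """Random perturbation flipping `strength` bits (an assumed move)."""
    for i in random.sample(range(N), strength):
        s = s[:i] + (1 - s[i],) + s[i + 1:]
    return s

# Escape edges: from each sampled local optimum, kick + hill-climb repeatedly
# and record where the search lands; edge weights are landing frequencies.
optima = {hill_climb(tuple(random.randint(0, 1) for _ in range(N)))
          for _ in range(50)}
edges = Counter()
for o in optima:
    for _ in range(100):
        edges[(o, hill_climb(kick(o)))] += 1
print(len(optima), "local optima sampled,", len(edges), "weighted escape edges")
```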

Relevance:

10.00%

Publisher:

Abstract:

Nanotechnology is becoming part of our daily life in a wide range of products such as computers, bicycles, sunscreens, or nanomedicines. While these applications are already becoming reality, considerable work awaits the scientists, engineers, and policy makers who want such nanotechnological products to yield a maximum of benefit at a minimum of social, environmental, economic, and (occupational) health cost. Considerable efforts at coordination and collaboration in research are needed if one wants to reach these goals within a reasonable time frame and at an affordable cost. This is recognized in Europe by the European Commission, which not only funds research projects but also supports the coordination of research efforts. One of these coordination efforts is NanoImpactNet, a researcher-operated network, which started in 2008 to promote scientific cross-talk across all disciplines on the health and environmental impact of nanomaterials. Stakeholders contribute to these activities, notably the definition of research and knowledge needs. Initial discussions in this domain focused on finding agreement on common metrics and on which elements are needed for standardized approaches to hazard and exposure identification. There are many nanomaterial properties that may play a role. Hence, to gain the time needed to study this complex matter full of uncertainties, researchers and stakeholders unanimously called for simple, easy, and fast risk assessment tools that can support decision making in this rapidly moving and growing domain. Today, several projects are starting or already running that will develop such assessment tools. At the same time, other projects investigate in depth which factors and material properties can lead to unwanted toxicity or exposure, what mechanisms are involved, and how such responses can be predicted and modelled. A vision for the future is that once these factors, properties, and mechanisms are understood, they can and will be accounted for in the development of new products and production processes, following the idea of "Safety by Design". The promise of all these efforts is a future with nanomaterials in which most of their risks are recognized and addressed before the products even reach the market.

Relevance:

10.00%

Publisher:

Abstract:

This paper discusses the qualitative comparative evaluation performed on the results of two machine translation systems with different approaches to the processing of multi-word units. It proposes a solution for overcoming the difficulties multi-word units present to machine translation by adopting a methodology that combines the lexicon-grammar approach with the OpenLogos ontology and semantico-syntactic rules. The paper also discusses the importance of a qualitative evaluation metric to correctly evaluate the performance of machine translation engines with regard to multi-word units.

Relevance:

10.00%

Publisher:

Abstract:

There is currently a considerable diversity of quantitative measures available for summarizing the results of single-case studies. Given that the interpretation of some of them is difficult due to the lack of established benchmarks, the current paper proposes an approach for obtaining further numerical evidence on the importance of the results, complementing the substantive criteria, visual analysis, and primary summary measures. This additional evidence consists of obtaining the statistical significance of the outcome when referred to the corresponding sampling distribution. This sampling distribution is formed by the values of the outcomes (expressed as data nonoverlap, R-squared, etc.) in case the intervention is ineffective. The approach proposed here is intended to offer the probability of an outcome being as extreme as the one observed when there is no treatment effect, without the need for assumptions that cannot be checked with guarantees. Following this approach, researchers would compare their outcomes to reference values rather than constructing the sampling distributions themselves. The integration of single-case studies is problematic when different metrics are used across primary studies and not all raw data are available. Via the approach for assigning p values, it is possible to combine the results of similar studies regardless of the primary effect size indicator. The alternatives for combining probabilities are discussed in the context of single-case studies, pointing out two potentially useful methods: one based on a weighted average and the other on the binomial test.
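Once each primary study carries a p value, the two combination routes mentioned above are easy to sketch. The binomial test asks whether the count of significant studies exceeds what chance alone would produce; for the weighted-average route, the exact variant the paper uses is not specified in the abstract, so the sketch below substitutes a weighted Stouffer combination as a stand-in, and the p values and weights are hypothetical.

```python
import numpy as np
from scipy import stats

def combine_binomial(p_values, alpha=0.05):
    """Binomial test: is the number of significant primary studies
    larger than expected by chance under the null?"""
    k = sum(p < alpha for p in p_values)
    return stats.binomtest(k, n=len(p_values), p=alpha,
                           alternative="greater").pvalue

def combine_weighted(p_values, weights):
    """Weighted Stouffer combination, used as a stand-in for the
    paper's weighted-average method (an assumption)."""
    z = stats.norm.isf(np.asarray(p_values))        # per-study z scores
    w = np.asarray(weights, dtype=float)
    return stats.norm.sf(np.dot(w, z) / np.sqrt(np.dot(w, w)))

p = [0.04, 0.20, 0.01, 0.12]   # hypothetical p values from four studies
n = [8, 15, 10, 6]             # e.g., weights taken from series lengths
print("binomial:", combine_binomial(p))
print("weighted:", combine_weighted(p, n))
```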