934 results for Bifurcation Diagrams
Abstract:
This document gathers the technical documentation generated during the development of a management system for an excavation company built with J2EE technology, together with the set of decisions taken to deliver it within the planned deadline. It is divided into 9 main chapters. The first defines the main characteristics of the project management. The second gathers the requirements, analysis and design of the project, accompanied by the corresponding use-case and sequence diagrams. The third describes the data model used. The fourth comprises the physical model of the system, defining the architecture, the main technical characteristics of the system and its implementation, and the decisions adopted during development. The fifth is a summary of the changes made to the original design. The sixth is devoted to the tests carried out once development was completed. The seventh is an installation manual describing the required specifications and the steps to follow to deploy the application. The eighth gives a brief economic assessment of the application together with an opportunity study. And the ninth describes the conclusions drawn from this final-year project.
Abstract:
Documentation (requirements, UML diagrams, etc.) for the development of an application to manage a driving school.
Abstract:
Introduction: Coronary magnetic resonance angiography (MRA) is a medical imaging technique that involves collecting data from consecutive heartbeats, always at the same time in the cardiac cycle, in order to minimize heart motion artifacts. This technique relies on the assumption that the coronary arteries always follow the same trajectory from heartbeat to heartbeat. Until now, choosing the acquisition window in the cardiac cycle was based exclusively on the position of minimal coronary motion. The goal of this study was to test the hypothesis that there are time intervals during the cardiac cycle when coronary beat-to-beat repositioning is optimal. The repositioning uncertainty values in these time intervals were then compared with the intervals of low coronary motion in order to propose an optimal acquisition window for coronary MRA. Methods: Cine breath-hold x-ray angiograms with synchronous ECG were collected from 11 patients who underwent elective routine diagnostic coronarography. Twenty-three bifurcations of the left coronary artery were selected as markers to evaluate repositioning uncertainty and velocity during the cardiac cycle. Each bifurcation was tracked by two observers with the help of a user-assisted algorithm implemented in Matlab (The MathWorks, Natick, MA, USA) that compared the trajectories of the markers coming from consecutive heartbeats and computed the coronary repositioning uncertainty in steps of 50ms up to 650ms after the R-wave. Repositioning uncertainty was defined as the diameter of the smallest circle encompassing the points to be compared at the same time after the R-wave. Student's t-tests with a false discovery rate (FDR, q=0.1) correction for multiple comparisons were applied to determine whether coronary repositioning and velocity vary statistically during the cardiac cycle. Bland-Altman plots and linear regression were used to assess intra- and inter-observer agreement. Results: The analysis of left coronary artery beat-to-beat repositioning uncertainty shows a tendency toward better repositioning in mid systole (less than 0.84±0.58mm) and mid diastole (less than 0.89±0.6mm) than in the rest of the cardiac cycle (highest value at 50ms: 1.35±0.64mm). According to Student's t-tests with FDR correction for multiple comparisons (q=0.1), two intervals, in mid systole (150-200ms) and mid diastole (550-600ms), provide statistically better repositioning than early systole and early diastole. Coronary velocity analysis reveals that the left coronary artery moves more slowly in end systole (14.35±11.35mm/s at 225ms) and mid diastole (11.78±11.62mm/s at 625ms) than in the rest of the cardiac cycle (highest value at 25ms: 55.96±22.34mm/s). This was confirmed by Student's t-tests with FDR correction for multiple comparisons (q=0.1, FDR-corrected p-value=0.054): the coronary velocity values at 225, 575 and 625ms do not differ much among themselves, but they are statistically lower than all the others. Bland-Altman plots and linear regression show that intra-observer agreement (y=0.97x+0.02 with R²=0.93 at 150ms) is better than inter-observer agreement (y=0.8x+0.11 with R²=0.67 at 150ms). Discussion: The present study has demonstrated that there are two time intervals in the cardiac cycle, one in mid systole and one in mid diastole, where left coronary artery repositioning uncertainty reaches local minima. It has also been shown that the velocity is lowest in end systole and mid diastole.
Since systole is less influenced by heart rate variability than diastole, it was finally proposed to test an acquisition window between 150 and 200ms after the R-wave.
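To make the repositioning-uncertainty metric concrete: it is the diameter of the smallest circle enclosing the positions of the same bifurcation, measured at the same delay after the R-wave across heartbeats. The original algorithm was a user-assisted Matlab implementation; the sketch below is our own minimal Python rendering of that definition (brute force over the pairs and triples that can determine the minimal enclosing circle), with illustrative names and data.

```python
import itertools
import math

def _circle_from_two(p, q):
    # Circle with segment pq as diameter.
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return cx, cy, math.dist(p, q) / 2

def _circle_from_three(p, q, r):
    # Circumcircle of three points; None if (nearly) collinear.
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, math.dist((ux, uy), p)

def _contains_all(circle, points, eps=1e-9):
    cx, cy, r = circle
    return all(math.dist((cx, cy), p) <= r + eps for p in points)

def repositioning_uncertainty(points):
    """Diameter of the smallest circle enclosing all 2-D marker positions."""
    if len(points) == 1:
        return 0.0
    best = None
    for a, b in itertools.combinations(points, 2):
        c = _circle_from_two(a, b)
        if _contains_all(c, points) and (best is None or c[2] < best[2]):
            best = c
    for a, b, c3 in itertools.combinations(points, 3):
        c = _circle_from_three(a, b, c3)
        if c and _contains_all(c, points) and (best is None or c[2] < best[2]):
            best = c
    return 2 * best[2]

# Example: one bifurcation tracked at the same delay after the R-wave
# in three consecutive heartbeats (positions in mm, invented values).
print(repositioning_uncertainty([(0.0, 0.0), (0.6, 0.1), (0.3, 0.5)]))
```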
Abstract:
The free software revolution has been, and still is, one of the most unpredictable and far-reaching phenomena to have emerged since the development of Information Technology. It is, however, a phenomenon that, owing to its own social and distributed construction, is intrinsically tied to the notion of conflict, whose most notorious manifestation is the fork. In this work we analyze the different types of conflict and the mechanisms commonly employed to resolve them, and ask whether these form part of the so-called Online Dispute Resolution (ODR) methods or belong to a different and separate category.
Abstract:
The theory of compositional data analysis is often focused on the composition only. However, in practical applications we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which on one side show the compositional geometry and on the other side are still comprehensible to a non-expert analyst, readable for all locations and scales of the data. This is done, e.g., by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation of a composition to complementary or explanatory data of categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation, and for how they might be taught to analysts who are not already experts in multivariate analysis.
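For illustration, a balance contrasting two groups of parts, as used in the balance displays mentioned above, has the closed form b = sqrt(rs/(r+s)) ln(g(x_num)/g(x_den)), where g() is the geometric mean and r, s are the group sizes. The following minimal Python sketch is ours; the function name and interface are illustrative, not from the contribution.

```python
import numpy as np

def balance(x, num, den):
    """Balance coordinate contrasting two groups of parts of a composition.

    Computes sqrt(r*s/(r+s)) * ln(gmean(x[num]) / gmean(x[den])),
    with r and s the sizes of the two groups.
    """
    x = np.asarray(x, dtype=float)
    r, s = len(num), len(den)
    g_num = np.exp(np.log(x[num]).mean())  # geometric mean of numerator group
    g_den = np.exp(np.log(x[den]).mean())  # geometric mean of denominator group
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# Example: contrast part 0 against parts 1 and 2 of a 3-part composition
print(balance([0.5, 0.3, 0.2], [0], [1, 2]))
```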
Abstract:
"compositions" is a new R package for the analysis of compositional and positive data. It contains four classes corresponding to the four different types of compositional and positive geometry (including the Aitchison geometry). It provides means for computation, plotting and high-level multivariate statistical analysis in all four geometries. These geometries are treated in a fully analogous way, based on the principle of working in coordinates and on the object-oriented programming paradigm of R. In this way, called functions automatically select the most appropriate type of analysis as a function of the geometry. The graphical capabilities include ternary diagrams and tetrahedrons, various compositional plots (boxplots, barplots, piecharts) and extensive graphical tools for principal components. Afterwards, portion and proportion lines, straight lines and ellipses in all geometries can be added to plots. The package is accompanied by a hands-on introduction, documentation for every function, demos of the graphical capabilities and plenty of usage examples. It allows direct and parallel computation in all four vector spaces and provides the beginner with a copy-and-paste style of data analysis, while letting advanced users keep the functionality and customizability they demand of R, as well as all necessary tools to add their own analysis routines. A complete example is included in the appendix.
Abstract:
R, available from http://www.r-project.org/, is 'GNU S': a language and environment for statistical computing and graphics. Many classical and modern statistical techniques have been implemented in the environment, and many more are supplied as packages. There are 8 standard packages, and many more are available through the CRAN family of Internet sites, http://cran.r-project.org. We started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for: operations on compositions (perturbation and power multiplication, subcomposition with or without residuals, centering of the data, computing Aitchison, Euclidean and Bhattacharyya distances, compositional Kullback-Leibler divergence, etc.); graphical presentation of compositions in ternary diagrams and tetrahedrons with additional features (barycenter, geometric mean of the data set, percentile lines, marking and coloring of subsets of the data set, their geometric means, annotation of individual data points, etc.); dealing with zeros and missing values in compositional data sets, with R procedures for simple and multiplicative replacement strategies; and time series analysis of compositional data. We will present the current status of MixeR development and illustrate its use on selected data sets.
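The basic operations listed above have simple closed forms. The following Python sketch is ours, not the MixeR API (which is in R); it illustrates perturbation, power multiplication and the Aitchison distance via the centered log-ratio.

```python
import numpy as np

def closure(x):
    """Rescale a positive vector so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation: the Aitchison-geometry analogue of addition."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(x, a):
    """Power multiplication: the analogue of scalar multiplication."""
    return closure(np.asarray(x, dtype=float) ** a)

def aitchison_dist(x, y):
    """Aitchison distance: Euclidean distance between clr coordinates."""
    clr = lambda z: np.log(z) - np.mean(np.log(z))
    return np.linalg.norm(clr(closure(x)) - clr(closure(y)))

# Example usage on two 3-part compositions
x, y = [0.7, 0.2, 0.1], [0.4, 0.4, 0.2]
print(perturb(x, y), power(x, 2.0), aitchison_dist(x, y))
```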
Abstract:
Objective: Intimal hyperplasia (IH) is one of the leading causes of failure after vascular interventions. It involves the proliferation of smooth muscle cells (SMCs) and the production of extracellular fibrous matrix. Gap junctional communication, mediated by membrane connexins (Cx), participates in the control of proliferation and migration. In human and mouse vessels, endothelial cells (ECs) express Cx37, Cx40 and Cx43, whereas SMCs are coupled by Cx43. We previously reported that Cx43 was increased in the SMCs of a human vein during the development of IH. In our experimental model of mouse carotid artery ligation (CAL), luminal narrowing occurred through an SMC-rich neointima after 2-4 weeks of ligation. This experimental model allows us to decipher the regulation of the cardiovascular connexins in the mouse. Methods: C57BL/6 mice were anesthetized and the left common carotid artery was dissected through a neck incision and ligated near the carotid bifurcation. The mice were then euthanized at 7, 14 and 28 days. Morphometric analyses were performed with measurements of total area, lumen and intimal area and media thickness. Western blots, immunocytochemistry and quantitative RT-PCR were performed for Cx43, Cx40 and Cx37. Results: All animals recovered with no symptoms of stroke. Morphometric analysis demonstrated that carotid ligation resulted in an initial increase (after 7 days) of the total vessel area followed by its reduction (after 28 days). This phenomenon was associated with a progressive increase in the intimal area and a consecutive decrease of the lumen. The media thickness was also increased after 14 and 28 days. This neointima formation was associated with a marked increase in the expression of Cx43 at both the protein and RNA levels. Concomitantly, Cx40 and Cx37 protein expression was reduced in the endothelium. This was confirmed by en face analyses showing reduced Cx37 and Cx40 levels in the endothelial cells covering the lesion. Conclusion: This study assessed the regulation of the cardiovascular connexins in the development of IH. This model will allow us to characterize the involvement of gap junctions in IH. In turn, this understanding is instrumental for the development of new therapeutic tools, as well as for the evaluation of the effects of drugs and gene therapies for this disease, for which no efficient therapy is available.
Abstract:
In this paper, we present a case of post-traumatic thrombosis of the internal carotid artery in a 33-year-old male after a blow to the neck with a ball. Death occurred 10 days after the blow as a result of intracranial hypertension and cerebral herniation secondary to an ischemic infarction affecting the entire territory of the right middle cerebral artery, both superficial and deep. The macroscopic and microscopic findings, which largely explain the mechanism of vascular injury with intimal dissection in the proximity of an atheroma plaque located above the carotid bifurcation, are discussed.
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values with a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
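A minimal Python sketch of the proposed regression imputation, assuming (our choice, not stated in the abstract) a log-log fit consistent with the lognormal behavior of the data; the function name, the detection-limit convention and the example values are illustrative only.

```python
import numpy as np

def impute_rounded_zeros(cu, mo, det_limit):
    """Replace below-detection-limit Mo values using their correlation with Cu.

    Fits a log-log regression on the lower quartile of the *measured* Mo
    values, as proposed above, and predicts each censored ("rounded zero")
    Mo value from its paired Cu value.
    """
    cu, mo = np.asarray(cu, dtype=float), np.asarray(mo, dtype=float)
    measured = mo >= det_limit
    q1 = np.quantile(mo[measured], 0.25)       # lower quartile of real Mo values
    fit = measured & (mo <= q1)                # points used for the regression
    slope, intercept = np.polyfit(np.log(cu[fit]), np.log(mo[fit]), 1)
    out = mo.copy()
    out[~measured] = np.exp(intercept + slope * np.log(cu[~measured]))
    return out

# Example (invented ppm values; 0.0 marks a below-detection-limit Mo assay)
cu = [0.5, 0.7, 0.9, 1.2, 1.6, 2.0, 2.6, 3.1, 0.4, 0.3]
mo = [6.0, 7.5, 9.0, 12.0, 18.0, 25.0, 40.0, 60.0, 0.0, 0.0]
print(impute_rounded_zeros(cu, mo, det_limit=5.0))
```

Note that each imputed value depends on the paired copper assay, which is exactly the stated advantage over fixed-value replacement.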
Abstract:
We introduce disk matrices which encode the knotting of all subchains in circular knot configurations. The disk matrices allow us to dissect circular knots into their subknots, i.e. knot types formed by subchains of the global knot. The identification of subknots is based on the study of linear chains in which a knot type is associated to the chain by means of a spatially robust closure protocol. We characterize the sets of observed subknot types in global knots taking energy-minimized shapes such as KnotPlot configurations and ideal geometric configurations. We compare the sets of observed subknots to knot types obtained by changing crossings in the classical prime knot diagrams. Building upon this analysis, we study the sets of subknots in random configurations of corresponding knot types. In many of the knot types we analyzed, the sets of subknots from the ideal geometric configurations are found in each of the hundreds of random configurations of the same global knot type. We also compare the sets of subknots observed in open protein knots with the subknots observed in the ideal configurations of the corresponding knot type. This comparison enables us to explain the specific dispositions of subknots in the analyzed protein knots.
Abstract:
Throughout the history of Electrical Engineering education, vector and phasor diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving equation systems by means of numerical methods. Diagrams have thus been pushed into the academic background and, although explained in theory, they are not used in a practical way within specific examples. This may hinder students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. This modification allows, at the same time, forecasting the obsolescence of this behavior and the line's loading capacity. In addition, we evaluate the impact of this tool on the learning process, presenting comparative undergraduate results over three academic years.
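For context, the loci that a Perrine-Baum-style diagram traces follow from the standard two-port line equation Vs = A·Vr + B·Ir evaluated over a load sweep. The Python sketch below is ours and uses invented line constants and load levels, not values from the article.

```python
import numpy as np

# Generalized line constants A (dimensionless) and B (ohm); illustrative only.
A = 0.98 * np.exp(1j * np.deg2rad(0.5))
B = 50.0 * np.exp(1j * np.deg2rad(75.0))

Vr = 220e3 / np.sqrt(3)                        # receiving-end phase voltage [V]
pf = 0.9                                       # lagging power factor

for load_mw in (0, 50, 100, 150):              # three-phase load sweep [MW]
    if load_mw:
        i_mag = load_mw * 1e6 / (3 * Vr * pf)  # per-phase current magnitude [A]
        Ir = i_mag * np.exp(-1j * np.arccos(pf))
    else:
        Ir = 0.0
    Vs = A * Vr + B * Ir                       # two-port line equation
    print(load_mw, round(abs(Vs)), round(np.rad2deg(np.angle(Vs)), 2))
```

Plotting the resulting Vs phasors for successive load levels yields the kind of locus the diagram is built from.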
Abstract:
Background: Asparagine N-glycosylation is one of the most important forms of protein post-translational modification in eukaryotes. This metabolic pathway can be subdivided into two parts: an upstream sub-pathway required for achieving proper folding for most of the proteins synthesized in the secretory pathway, and a downstream sub-pathway required to give variability to trans-membrane proteins, and involved in adaptation to the environment and innate immunity. Here we analyze the nucleotide variability of the genes of this pathway in human populations, identifying which genes show greater population differentiation and which genes show signatures of recent positive selection. We also compare how these signals are distributed between the upstream and the downstream parts of the pathway, with the aim of exploring how forces of population differentiation and positive selection vary among genes involved in the same metabolic pathway but subject to different functional constraints. Results: Our results show that genes in the downstream part of the pathway are more likely to show a signature of population differentiation, while events of positive selection are equally distributed among the two parts of the pathway. Moreover, events of positive selection are frequent in genes that are known to be at bifurcation points and that are identified as being in key positions by a network-level analysis, such as MGAT3 and GCS1. Conclusions: These findings indicate that the upstream part of the asparagine N-glycosylation pathway has lower diversity among populations, while the downstream part is freer to tolerate diversity among populations. Moreover, the distribution of signatures of population differentiation and positive selection can change between parts of a pathway, especially between parts that are exposed to different functional constraints. Our results support the hypothesis that genes involved in constitutive processes can be expected to show lower population differentiation, while genes involved in traits related to the environment should show higher variability. Taken together, this work broadens our knowledge of how events of population differentiation and positive selection are distributed among different parts of a metabolic pathway.
Abstract:
Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization. To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems. Three studies, one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources, explain the application of decision theory and influence diagrams to each of these decisions. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
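A minimal sketch of the expected-utility principle invoked above for one-stage decision problems: each action's expected utility is its probability-weighted utility over the states, and the rational course of action maximizes it. The actions, states and numbers in this Python sketch are invented for illustration.

```python
import numpy as np

def best_action(utilities, probabilities):
    """Return the action maximizing expected utility, plus all expected utilities.

    utilities: dict mapping action name -> sequence of utilities, one per state
    probabilities: sequence of state probabilities (must sum to 1)
    """
    p = np.asarray(probabilities, dtype=float)
    eu = {a: float(np.dot(np.asarray(u, dtype=float), p))
          for a, u in utilities.items()}
    return max(eu, key=eu.get), eu

# Illustrative one-stage problem under two states (same source / different
# source of a trace); utilities and probabilities are invented.
actions = {"report_match": [1.0, -10.0], "request_more_analysis": [0.2, 0.0]}
choice, eu = best_action(actions, [0.9, 0.1])
print(choice, eu)
```

Two-stage problems apply the same principle recursively: the value of a first-stage action includes the expected utility of acting optimally at the second stage.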
Abstract:
The purpose of Project ASSIST is to provide computer training to individuals who are blind, visually impaired or deaf-blind. Our training materials address all levels of users, from beginners to advanced. We offer tutorials, keyboard guides and diagrams, and course packets. These materials can be used by individuals who want to learn popular computer programs on their own and by professional trainers for their organization's computer training program. We also offer instructor-led training through our ASSIST Online distance learning program.