928 results for "essence of operation"


Relevance: 90.00%

Abstract:

Temporal logics targeting real-time systems are traditionally undecidable. Based on a restricted fragment of MTL-R, we propose a new approach for the runtime verification of hard real-time systems. The novelty of our technique is that it is based on incremental evaluation, allowing us to effectively treat duration properties (which play a crucial role in real-time systems). We describe the two levels of operation of our approach: offline simplification by quantifier removal techniques; and online evaluation of a three-valued interpretation for formulas of our fragment. Our experiments show the applicability of this mechanism as well as the validity of the provided complexity results.
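As a rough illustration of the online level, a three-valued monitor refines its verdict incrementally as samples arrive, and once the verdict becomes definitive it never changes. The sketch below is a hypothetical simplification for a single duration bound over a finite trace, not the paper's MTL-R fragment or its algorithm.

```python
from enum import Enum

class Verdict(Enum):
    TRUE = 1      # satisfied on every possible extension of the trace
    FALSE = 2     # violated; no extension can repair it
    UNKNOWN = 3   # verdict still depends on future samples

class DurationMonitor:
    """Incrementally checks the (hypothetical) duration property
    "p holds for at most `bound` time units over a trace ending at
    `trace_end`", emitting a three-valued verdict after each sample."""

    def __init__(self, bound, trace_end):
        self.bound = bound          # maximum admissible duration of p
        self.trace_end = trace_end  # known end time of the finite trace
        self.duration = 0.0         # accumulated duration of p so far
        self.now = 0.0

    def step(self, p_holds, dt):
        """Consume one sample: p's truth value over the next dt time units."""
        self.now += dt
        if p_holds:
            self.duration += dt
        if self.duration > self.bound:
            return Verdict.FALSE    # a violation can never be undone
        if self.duration + (self.trace_end - self.now) <= self.bound:
            return Verdict.TRUE     # too little time left to exceed the bound
        return Verdict.UNKNOWN

monitor = DurationMonitor(bound=2.0, trace_end=10.0)
print(monitor.step(p_holds=True, dt=1.5))   # Verdict.UNKNOWN
```

Because TRUE and FALSE are irrevocable, the monitor only ever keeps a constant amount of state per duration constraint, which is what makes incremental evaluation viable for hard real-time monitoring.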

Relevance: 90.00%

Abstract:

The analysis of journalistic discourse and its social embeddedness has seen significant advances in the last two decades, especially due to the emergence and development of Critical Discourse Analysis. However, three important aspects remain under-researched: the time plane in discourse analysis, the discursive strategies of social actors, and the extra- and supra-textual effects of mediated discourse. Firstly, understanding the biography of public matters requires a longitudinal examination of mediated texts and their social contexts, but most forms of analysis of journalistic discourse do not account for the time sequence of texts and its implications. Secondly, as the media representation of social issues is, to a large extent, a function of the discursive construction of events, problems and positions by social actors, the discursive strategies that they employ in a variety of arenas and channels "before" and "after" journalistic texts need to be examined. Thirdly, the fact that many of the modes of operation of discourse are extra- or supra-textual calls for a consideration of various social processes "outside" the text. This paper aims to make a theoretical and methodological contribution to the integration of these issues in discourse analysis by proposing a framework that combines a textual dimension with a contextual one.

Relevance: 90.00%

Abstract:

This paper presents a model predictive current control applied to a proposed single-phase five-level active rectifier (FLAR). This current control strategy uses the discrete-time nature of the active rectifier to define its state in each sampling interval. Although the switching frequency is not constant, this current control strategy makes it possible to follow the reference with low total harmonic distortion (THDF). The implementation of the active rectifier used to obtain the experimental results is described in detail throughout the paper, covering the circuit topology, the principle of operation, the power theory, and the current control strategy. The experimental results confirm the robustness and good performance (with low current THDF and controlled output voltage) of the proposed single-phase FLAR operating with model predictive current control.
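To make the control loop concrete, a finite-control-set predictive current controller can be sketched in a few lines: at each sampling instant, predict the next-step current for every admissible converter state and apply the state minimizing the tracking error. The discrete model, the five voltage levels and all parameter values below are illustrative assumptions, not the converter model used in the paper.

```python
# Minimal sketch of finite-control-set model predictive current control,
# assuming a simplified single-phase model: L*di/dt = v_s - R*i - v_conv.

def predict_current(i_k, v_s, v_conv, R=0.1, L=5e-3, Ts=50e-6):
    """One-step Euler prediction of the rectifier input current."""
    return i_k + (Ts / L) * (v_s - R * i_k - v_conv)

def select_level(i_k, i_ref_next, v_s, vdc=200.0):
    """Pick the converter voltage level minimizing the predicted error."""
    levels = (-vdc, -vdc / 2, 0.0, vdc / 2, vdc)   # five-level rectifier
    return min(levels,
               key=lambda v: abs(i_ref_next - predict_current(i_k, v_s, v)))

# at every sampling instant, apply the chosen level for the next interval
print(select_level(i_k=2.0, i_ref_next=2.5, v_s=150.0))
```

Since the minimizing level can change from one sampling interval to the next in an irregular pattern, the switching frequency is inherently variable, which matches the behavior noted in the abstract.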

Relevance: 90.00%

Abstract:

Mesophilic anaerobic digestion treating sewage sludge was investigated at five full-scale sewage treatment plants in Ireland. The anaerobic digestion plants are compared and evaluated in terms of design, equipment, operation, monitoring and management. All digesters are cylindrical, gas-mixed and heated continuously stirred tank reactors (CSTR), varying in size from 130 m³ to 800 m³. All digesters are heated by heat exchanger systems. Three plants reported difficulties with the heating systems, ranging from blockages to insufficient insulation and design; exchangers were modified and replaced within one year of operation at two plants. All but one plant had combined heat and power (CHP) systems installed. Parameter monitoring is a problem at all plants, mainly due to a lack of staff and knowledge. The plant operators consider pH and temperature the most important parameters to be measured for successful monitoring of an anaerobic digester. The short time taken and the ease with which pH and temperature can be measured may favour these parameters. Three laboratory-scale pilot anaerobic digesters were operated using a variety of feeds over a 144-day period. Two of the pilots were unmixed and the third was mechanically mixed. As expected, the unmixed reactors removed more COD through retention of solids in the digesters, but they also produced greater quantities of biogas than the mixed digester, especially when a low-solids feed such as whey was used. The mixed digester broke down more solids due to the superior contact between the substrate and the biomass. All three reactors showed good performance results for whey and sewage solids. Scum formation occurred, causing operational problems for both mixed and unmixed reactors, when cattle slurry was used as the main feed source. The pilot trials were also used to investigate which parameters were the best indicators of process instability. These trials clearly indicated that total volatile fatty acid (VFA) concentration was the best parameter for showing signs of early process imbalance, while methane composition in the biogas was a good indicator of possible nutrient deficiencies in the feed and oxygen shocks. pH was found to be a good process parameter only if the wastewater being treated produced low bicarbonate alkalinity during treatment.

Relevance: 90.00%

Abstract:

The ID-Chagas test is a particle gel immunoassay (PaGIA). Red-coloured particles are sensitised with three different synthetic peptides representing antigen sequences of Trypanosoma cruzi: Ag2, TcD and TcE. When these particles are mixed with serum containing specific antibodies, they agglutinate. The reaction mixture is centrifuged through a gel filtration matrix, allowing agglutinated particles to remain trapped on top of, or distributed within, the gel. The result can be read visually. In order to investigate the ability of the ID-PaGIA to discriminate negative and positive sera, 111 negative and 119 positive sera, collected in four different Brazilian institutions, were tested by each of the participants. All sera had previously been classified as positive or negative according to results obtained with three conventional tests (indirect immunofluorescence, indirect hemagglutination, and enzyme-linked immunosorbent assay). Sensitivity rates of the ID-PaGIA varied from 95.7% to 97.4%, with a mean sensitivity of 96.8%, and specificity rates varied from 93.8% to 98.8%, with a mean specificity of 94.6%. The overall kappa value was 0.94. The advantages of the assay are its simplicity of operation and its 20-min reaction time. In this study, the ID-PaGIA proved to be highly sensitive and specific.
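The reported sensitivity, specificity and kappa all follow from 2x2 tables of test results against the reference classification; a minimal sketch of the arithmetic is below. The counts in the example are made-up placeholders consistent with 119 positive and 111 negative sera, not the study's actual data.

```python
def diagnostic_stats(tp, fn, tn, fp):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table."""
    n = tp + fn + tn + fp
    sensitivity = tp / (tp + fn)        # true positives among the diseased
    specificity = tn / (tn + fp)        # true negatives among the healthy
    p_observed = (tp + tn) / n          # raw agreement with the standard
    # chance agreement: product of marginal proportions, summed per class
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

# illustrative counts only (119 positive and 111 negative sera)
print(diagnostic_stats(tp=115, fn=4, tn=105, fp=6))
```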

Relevance: 90.00%

Abstract:

BACKGROUND: The objective of the present study was to compare current results of prosthetic valve replacement following acute infective native valve endocarditis (NVE) with those of prosthetic valve endocarditis (PVE). Prosthetic valve replacement is often necessary for acute infective endocarditis. Although valve repair and homografts have been associated with excellent outcomes, homograft availability and the extent of valvular destruction often dictate prosthetic valve replacement in patients with acute bacterial endocarditis. METHODS: A retrospective analysis of the experience with prosthetic valve replacement following acute NVE and PVE between 1988 and 1998 was performed at the Montreal Heart Institute. RESULTS: Seventy-seven patients (57 men and 20 women, mean age 48 +/- 16 years) with acute infective endocarditis underwent valve replacement. Fifty patients had NVE and 27 had PVE. Four patients (8%) with NVE died within 30 days of operation, and there were no hospital deaths among patients with PVE. Survival at 1, 5, and 7 years averaged 80% +/- 6%, 76% +/- 6%, and 76% +/- 6% for NVE and 70% +/- 9%, 59% +/- 10%, and 55% +/- 10% for PVE, respectively (p = 0.15). Reoperation-free survival at 1, 5, and 7 years averaged 80% +/- 6%, 76% +/- 6%, and 76% +/- 6% for NVE and 45% +/- 10%, 40% +/- 10%, and 36% +/- 9% for PVE (p = 0.003). Five-year survival for NVE averaged 75% +/- 9% following aortic valve replacement and 79% +/- 9% following mitral valve replacement. Five-year survival for PVE averaged 66% +/- 12% following aortic valve replacement and 43% +/- 19% following mitral valve replacement (p = 0.75). Nine patients underwent reoperation during follow-up; the indications were prosthesis infection in 4 patients (3 mitral, 1 aortic), dehiscence of a mitral prosthesis in 3, and dehiscence of an aortic prosthesis in 2. CONCLUSIONS: Prosthetic valve replacement for NVE resulted in good long-term patient survival with a minimal risk of reoperation compared with patients who underwent valve replacement for PVE. Among patients with PVE, those who needed reoperation had recurrent endocarditis or noninfectious periprosthetic dehiscence.

Relevance: 90.00%

Abstract:

During the 1990s, China's feed sector became increasingly privatized, more feed mills opened, and the scale of operation expanded. Capacity utilization remained low and multi-ministerial supervision was still prevalent, but the feed mill sector showed a positive performance overall, posting a growth rate of 11 percent per year. Profit margin over sales was within the allowable rates set by the government of China, at 3 to 5 percent. Financial efficiency improved, with a 20 percent quicker turnover of working capital. Average technical efficiency was 0.805, as more efficient feed mills increasingly gained production shares. This study finds evidence that increasing privatization explains the improved performance of the commercial feed mill sector. The drivers that shaped the feed mill sector in the 1990s have changed with China's accession to the World Trade Organization. Under the new policy regime, the study foresees that, assuming an adequate supply of soy meal and excess capacity in the feed mill sector, China is likely to allow corn imports up to the tariff rate quota (TRQ) of 7.2 mmt, since the in-quota rate is very low at 1 percent. However, when the TRQ is exceeded, the import duty jumps to a prohibitive out-quota rate of 65 percent. With an import duty for meat of only 10 to 12 percent, China would have a strong incentive to import meat products directly rather than bringing in expensive corn to produce meat domestically. This would be further reinforced if structural transformation in the swine sector were to narrow the cost differential between domestic and imported pork.
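The tariff arithmetic behind that conclusion can be made explicit. The sketch below encodes only the tariff figures quoted above (7.2 mmt TRQ, 1 percent in-quota, 65 percent out-quota); the CIF price is a hypothetical placeholder.

```python
def landed_cost_per_tonne(cif_price, volume_mmt, trq_mmt=7.2,
                          in_rate=0.01, out_rate=0.65):
    """Average duty-paid cost per tonne of corn when part of the
    import volume falls outside the tariff rate quota."""
    in_quota = min(volume_mmt, trq_mmt)
    out_quota = volume_mmt - in_quota
    total = (in_quota * cif_price * (1 + in_rate)
             + out_quota * cif_price * (1 + out_rate))
    return total / volume_mmt

below = landed_cost_per_tonne(cif_price=150.0, volume_mmt=7.0)   # ~151.5
above = landed_cost_per_tonne(cif_price=150.0, volume_mmt=10.0)  # ~178.4
print(below, above)   # the average cost jumps once the 7.2 mmt quota binds
```

The 65 percent cliff on out-quota corn, against a flat 10 to 12 percent duty on meat, is what tilts the incentive toward importing meat directly.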

Relevance: 90.00%

Abstract:

The aim of this paper is to examine the various controversies over the genuine problems of toleration in a plurally diverse polity, as, both historically and conceptually, toleration is one of the foundational characteristics that define the very essence of a plurally diverse polity and the basic virtue associated with a liberal conception of citizenship. In section 1, I present the main philosophical and conceptual issues related to the toleration-based approach to diversity in liberal political theory. In section 2, I identify the conditions and the circumstances of toleration. In section 3, I articulate the most pressing objections against toleration. In section 4, I present two competing approaches with which the toleration-based approach to diversity is faced. In the concluding section, I outline a modified conception of toleration that mediates between the different requirements associated with the two principled commitments of the liberal version of the rights-based conception of citizenship.

Relevance: 90.00%

Abstract:

The object of game theory is the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. I then investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and on the individual payoff calculations.
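The Nowak and May (1992) result mentioned above is easy to reproduce in a few lines. The sketch below uses an illustrative temptation payoff and synchronous imitate-the-best updating on a torus, one common variant of the model rather than the thesis' exact setup.

```python
import random

SIZE, B = 20, 1.8   # grid side; temptation payoff b > 1 (illustrative)
# True = cooperate, False = defect; random initial strategies
grid = [[random.random() < 0.5 for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    """Moore neighborhood on a torus, self included (as in Nowak-May)."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            yield (x + dx) % SIZE, (y + dy) % SIZE

def payoff(x, y):
    """One-shot game against all neighbors: a cooperator earns 1 per
    cooperating partner, a defector earns B per cooperating partner."""
    gain = 1.0 if grid[x][y] else B
    return sum(gain for nx, ny in neighbors(x, y) if grid[nx][ny])

def step():
    """Synchronous update: every site imitates its best-scoring neighbor."""
    global grid
    scores = {(x, y): payoff(x, y) for x in range(SIZE) for y in range(SIZE)}
    new = [[False] * SIZE for _ in range(SIZE)]
    for x in range(SIZE):
        for y in range(SIZE):
            bx, by = max(neighbors(x, y), key=scores.get)
            new[x][y] = grid[bx][by]
    grid = new

for _ in range(50):
    step()
print(sum(map(sum, grid)) / SIZE**2)   # fraction of surviving cooperators
```

With b moderately above 1, clusters of cooperators typically persist indefinitely; this spatial effect is the baseline the thesis extends to small-world and community-structured graphs.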

Relevance: 90.00%

Abstract:

Purpose: To compare entero-MDCT with entero-MRI performed for suspicion of acute exacerbation of known Crohn's disease. Methods and Materials: Fifty-seven patients (mean age 33.5 years) with histologically proven Crohn's disease were prospectively included. They presented to the emergency department with clinical symptoms suggesting acute exacerbation. After oral administration of 1-2 l of 5% methylcellulose (+ syrup), entero-MDCT and entero-MRI were performed on each patient (mean delay 1 day). Three experienced radiologists blindly and independently evaluated each examination for technical quality, eight pathological CT features (bowel wall thickening, pathological wall enhancement, stenosis, lymphadenopathy, mesenteric haziness, intraperitoneal fluid, abscess, fistula) and the final main diagnosis. Interobserver agreement (kappa) was calculated. Sensitivity and specificity resulted from comparison with the reference standard, consisting of operation (n = 30) or long-term follow-up in the case of conservative treatment (n = 27). Results: Entero-MDCT demonstrated considerably fewer artefacts than entero-MRI (p < 0.0001). In 9 entero-MDCT/MRI examinations, no activity of Crohn's disease was seen, whereas in 48 examinations active disease could be demonstrated, such as intraperitoneal abscesses (n = 11), fistulas (n = 13), stenoses (n = 23), and acute (n = 15) or chronic (n = 23) inflammation. Interobserver agreement of the three readers was not significantly different between entero-MDCT and entero-MRI, nor were sensitivity (range 60-89%) and specificity (range 75-100%) for each of the eight pathological features or for the main diagnosis. Conclusion: Entero-MRI is statistically of similar diagnostic value to entero-MDCT for acute complications of Crohn's disease. Therefore entero-MRI, devoid of harmful irradiation, should become the preferred imaging modality, since these are young patients who are likely to be exposed to frequent imaging controls in the future.

Relevance: 90.00%

Abstract:

1.1 Background

Adenocarcinomas of the pancreas are exocrine tumors originating from the ductal system and include two morphologically distinct entities: ductal adenocarcinoma and mucinous adenocarcinoma. Ductal adenocarcinoma is by far the most frequent malignant tumor of the pancreas, representing about 90% of all pancreatic cancers. It is associated with a very poor prognosis, because there are currently no biological markers or diagnostic tools for identification of the disease at an early stage. Most of the time the disease is extensive, with vascular and nerve involvement or metastatic spread at the time of diagnosis (1). Survival at 5 years is less than 5%, placing it fifth among the leading causes of cancer death in the world (2). The mucinous form of pancreatic adenocarcinoma is less frequent and seems to have a better prognosis, with about 57% survival at 5 years (1)(3)(4).

Each morphologic type of pancreatic adenocarcinoma is associated with particular preneoplastic lesions. Two types of preneoplastic lesions are described: pancreatic intra-epithelial neoplasia (PanIN), which affects the small and peripheral pancreatic ducts, and intraductal papillary-mucinous neoplasm (IPMN), which involves the main pancreatic duct and its principal branches. Both preneoplastic lesions lead, by different mechanisms, to pancreatic adenocarcinoma (1)(2)(3)(4)(5)(6)(7)(8)(9)(10).

The purpose of our study is a retrospective analysis of various clinical and histo-morphological parameters in order to assess a difference in survival between these two morphological types of pancreatic adenocarcinoma.

1.2 Material and methods

We conducted a retrospective analysis including 35 patients (20 men and 15 women) who underwent surgical treatment for pancreatic adenocarcinoma at the Surgical Department of the University Hospital in Lausanne. The patients involved in our study were treated between 2003 and 2008, allowing a mean follow-up of at least 5 years. For each patient the following parameters were analysed: age, gender, type of operation, type of preneoplastic lesion, TNM stage, histological grade of the tumor, vascular invasion, lymphatic and perineural invasion, resection margins, and adjuvant treatment.

The results from these observations were included in univariate and multivariate statistical analyses and compared with overall survival, as well as with specific survival for each morphologic subtype of adenocarcinoma.

As the low number of mucinous adenocarcinomas (n = 5) was insufficient for a pertinent statistical analysis, we compared the data obtained from adenocarcinomas developed on PanIN with adenocarcinomas developed on IPMN, including both ductal and mucinous types.

1.3 Results

Our results show that adenocarcinomas developed on pre-existing IPMN, including both morphologic types (ductal and mucinous), are associated with better survival and prognosis than adenocarcinomas developed on PanIN.

1.4 Conclusion

This study suggests that the most relevant parameter for survival in pancreatic adenocarcinoma is the type of preneoplastic lesion. A significant difference in survival was noted between adenocarcinomas developed on PanIN and adenocarcinomas developed on IPMN precursor lesions: ductal adenocarcinomas developed on IPMN present significantly longer survival than those developed on PanIN lesions (p = 0.01). We can therefore suggest that the histological type of the preneoplastic lesion, rather than the histological type of the adenocarcinoma, should be the determinant prognostic factor for survival in pancreatic adenocarcinoma.

Relevance: 90.00%

Abstract:

Increased production of reactive oxygen species (ROS) in mitochondria underlies major systemic diseases, and this clinical problem stimulates great scientific interest in the mechanism of ROS generation. However, the mechanism of hypoxia-induced change in ROS production is not fully understood. To analyze this mechanism mathematically in detail, taking into consideration all the possible redox states formed in the process of electron transport, a system of hundreds of differential equations must be constructed, even for respiratory complex III alone. To facilitate such tasks, we developed a new methodology of modeling, which resides in the automated construction of large sets of differential equations. The detailed modeling of electron transport in mitochondria allowed for the identification of two steady-state modes of operation (bistability) of respiratory complex III under the same microenvironmental conditions. Various perturbations could induce the transition of the respiratory chain from one steady state to another. While complex III is normally in a low-ROS-producing mode, temporal anoxia could switch it to a high-ROS-producing state, which persists after the return to a normal oxygen supply. This prediction, which we qualitatively validated experimentally, explains the mechanism of anoxia-induced cell damage. Recognition of the bistability of complex III operation may enable novel therapeutic strategies for oxidative stress, and our method of modeling could be widely used in systems biology studies.
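Bistability of this kind can be illustrated with a one-variable toy model: the same rate law admits two stable fixed points, and a transient perturbation decides which one the system occupies afterwards. The cubic kinetics below is purely illustrative, standing in for the paper's automatically generated system of hundreds of mass-action ODEs.

```python
def settle(x0, steps=20000, dt=1e-3):
    """Euler-integrate the bistable toy ODE dx/dt = -x**3 + 3*x + 1
    until the trajectory settles at a stable fixed point."""
    x = x0
    for _ in range(steps):
        x += dt * (-x**3 + 3 * x + 1)
    return x

print(settle(-2.0))   # ~ -1.53: the low steady state
print(settle(2.0))    # ~ +1.88: the high steady state, same parameters
```

Once a perturbation (the analogue of temporal anoxia) kicks the state over the unstable middle fixed point, the system remains in the high state even after the perturbation is removed, which mirrors the persistence of high ROS production described above.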

Relevance: 90.00%

Abstract:

With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary histories. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. The current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the above assumptions. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds them all to the most general one that relaxes them all. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set. Our experiments show that holding all three assumptions is not realistic for any of the real data sets, meaning that simple models that hold these assumptions can be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes the three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly selected data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Moreover, I show through several experiments that the proposed general model is biologically plausible.
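The computational trick named at the end, the Kronecker product, lifts position-wise 4x4 nucleotide matrices to the 64-codon state space. The sketch below is a toy HKY-like construction with illustrative rates, not the thesis' actual KCM matrices.

```python
import numpy as np

def nucleotide_matrix(kappa=2.0):
    """Toy HKY-like exchangeability matrix over A, C, G, T (indices 0-3)."""
    purines = {0, 2}                       # A and G
    Q = np.ones((4, 4))
    for i in range(4):
        for j in range(4):
            if (i in purines) == (j in purines):
                Q[i, j] = kappa            # transition; transversions stay 1
    np.fill_diagonal(Q, 0.0)               # diagonals are set by the model
    return Q

Q1 = nucleotide_matrix()                   # rates at codon position 1
I = np.eye(4)
# a single substitution at position 1 leaves positions 2 and 3 unchanged:
pos1 = np.kron(np.kron(Q1, I), I)          # 64 x 64 codon-level block
print(pos1.shape)                          # (64, 64)
```

A full rate matrix would then combine the three positions (conceptually Q1⊗I⊗I + I⊗Q2⊗I + I⊗I⊗Q3) and overlay codon-level selection; the point is that the 64x64 structure is generated from small 4x4 ingredients, which keeps the generalized model computationally tractable.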

Relevance: 90.00%

Abstract:

BACKGROUND: This study is a single-institution validation of video-assisted thoracoscopic (VATS) resection of small solitary pulmonary nodules (SPN) previously localized by a CT-guided hook-wire system in a consecutive series of 45 patients. METHODS: The records of all patients undergoing VATS resection of SPN preoperatively localized by a CT-guided hook-wire system from January 2002 to December 2004 were assessed with respect to failure to localize the lesion with the hook-wire system, conversion thoracotomy rate, duration of operation, postoperative complications, and histology of the SPN. RESULTS: Forty-five patients underwent 49 VATS resections, with simultaneous bilateral SPN resection performed in 4. Preoperative CT-guided hook-wire localization failed in two patients (4%). Conversion thoracotomy was necessary in two patients (4%) because it was not possible to resect the lesion by a VATS approach. The average operative time was 50 min. Postoperative complications occurred in 3 patients (6%): one hemothorax and two pneumonias. The mean hospital stay was 5 days (range: 2-18 days). Histological assessment revealed inflammatory disease in 17 patients (38%), metastasis in 17 (38%), non-small-cell lung cancer (NSCLC) in 4 (9%), lymphoma in 3 (6%), interstitial fibrosis in 2 (4%), histiocytoma in one (2%), and hamartoma in one (2%). CONCLUSIONS: Histological analysis of the resected SPN revealed unexpected malignant disease in more than 50% of the patients, indicating that histological clarification of SPN seems warranted. VATS resection of SPN previously localized by a CT-guided hook-wire system is associated with a low conversion thoracotomy rate, a short operation time, and few postoperative complications, and it is well suited for the clarification of SPN.

Relevance: 90.00%

Abstract:

A laser-based technique for printing transparent and weakly absorbing liquids is developed. Its principle of operation relies on the tight focusing of short laser pulses inside the liquid, close to its free surface, in such a way that the laser radiation is absorbed in a tiny volume around the beam waist, with practically no absorption at any other location along the beam path. If the absorbed energy overcomes the optical breakdown threshold, a cavitation bubble is generated, and its expansion results in the propulsion of a small fraction of the liquid, which can be collected on a substrate, leading to the printing of a microdroplet for each laser pulse. The technique does not require the preparation of the liquid in thin-film form, and its forward mode of operation imposes no restriction on the optical properties of the substrate. These characteristics make it well suited for printing a wide variety of materials of interest in diverse applications. We demonstrate that the film-free laser forward printing technique is capable of printing microdroplets with good resolution, reproducibility and control, and we analyze the influence of the main process parameter, the laser pulse energy. The mechanisms of liquid printing are also investigated: time-resolved imaging provides a clear picture of the dynamics of liquid transfer, which allows the main features observed in the printed droplets to be understood.