965 results for "Force-based finite elements"
Abstract:
The atomic force microscope is a convenient tool to probe living samples at the nanometric scale. Among its numerous capabilities, the instrument can be operated as a nano-indenter to gather information about the mechanical properties of the sample. In this operating mode, the deformation of the cantilever is displayed as a function of the indentation depth of the tip into the sample. Fitting this curve with different theoretical models permits us to estimate the Young's modulus of the sample at the indentation spot. We describe what is, to our knowledge, a new technique for processing these curves to distinguish structures of different stiffness buried in the bulk of the sample. The working principle of this new imaging technique has been verified with finite element models and successfully applied to living cells.
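The curve-fitting step can be illustrated with a minimal sketch. The abstract does not say which theoretical models were used, so a Hertz contact model for a spherical tip is assumed here; all numerical values (tip radius, Poisson ratio, noise level, the 5 kPa target modulus) are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def hertz_force(delta, E, R=20e-9, nu=0.5):
    """Hertzian force (N) for a spherical tip of radius R (m) indenting
    an incompressible sample (Poisson ratio nu) to depth delta (m)."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

# Synthetic force curve for a 5 kPa sample plus measurement noise:
rng = np.random.default_rng(0)
delta = np.linspace(0.0, 500e-9, 100)                 # indentation depth, m
force = hertz_force(delta, 5e3) + rng.normal(0.0, 5e-12, delta.size)

E_fit, _ = curve_fit(hertz_force, delta, force, p0=[1e3])  # fit E only
print(f"fitted Young's modulus: {E_fit[0] / 1e3:.2f} kPa")
```

With `p0` supplying a single starting value, `curve_fit` treats only `E` as free and keeps `R` and `nu` at their defaults.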
Abstract:
Large Dynamic Message Signs (DMSs) have been increasingly used on freeways, expressways and major arterials to better manage traffic flow by providing accurate and timely information to drivers. Overhead truss structures are typically employed to support these DMSs, allowing them to span more lanes with a wider display. In recent years, there has been increasing evidence that the truss structures supporting these large and heavy signs are subjected to much more complex loadings than are typically accounted for in the codified design procedures. Consequently, some of these structures have required frequent inspections, retrofitting, and even premature replacement. Two manufacturing processes are primarily used in truss structures: welding and bolting. Recently, cracks at weld toes were reported for structures employed in some states. Extremely large loads (e.g., due to high winds) can cause brittle fracture, and cyclic vibration (e.g., due to diurnal variation in temperature or to oscillations in the wind force induced by vortex shedding behind the DMS) may lead to fatigue damage; these are the two major failure modes for metallic materials. Wind and strain resulting from temperature changes are the main loads that affect the structures during their lifetime. The American Association of State Highway and Transportation Officials (AASHTO) Specification defines the limit loads in dead load, wind load, ice load, and fatigue design for natural wind gust and truck-induced gust. The objectives of this study are to investigate wind and thermal effects in bridge-type overhead DMS truss structures and to improve the current design specifications (e.g., for thermal design). To accomplish this objective, it is necessary to study the structural behavior and the detailed strain-stress state of the truss structures caused by wind load on the DMS cabinet and thermal load on the truss supporting it. The study is divided into two parts. 
The Computational Fluid Dynamics (CFD) component and part of the structural analysis component of the study were conducted at the University of Iowa, while the field study and related structural analysis computations were conducted at Iowa State University. The CFD simulations were used to determine the air-induced forces (wind loads) on the DMS cabinets, and finite element analysis was used to determine the response of the supporting trusses to these pressure forces. The field observation portion consisted of short-term monitoring of several DMS cabinet/trusses and long-term monitoring of one DMS cabinet/truss. The short-term monitoring was a one- or two-day event in which several message sign panel/trusses were tested. The long-term monitoring field study extended over several months. Analysis of the data focused on identifying important behaviors under both ambient and truck-induced winds and the effect of daily temperature changes. Results of the CFD investigation, field experiments and structural analysis of the wind-induced forces on the DMS cabinets and their effect on the supporting trusses showed that the passage of trucks cannot be responsible for the problems observed to develop at trusses supporting DMS cabinets. Rather, the data pointed toward the important effect of the thermal load induced by cyclic (diurnal) variations of the temperature. Thermal influence is not addressed in the specification, either in limit load or in fatigue design. Although the frequency of the thermal load is low, results showed that when the temperature range is large, the stress range can be significant for the structure, especially near welded areas where stress concentrations may occur. Moreover, stress amplitude and range are the primary parameters for brittle fracture and fatigue life estimation. 
Long-term field monitoring of one of the overhead truss structures in Iowa was used as the research baseline to estimate the effects of diurnal temperature changes on fatigue damage. The evaluation of the collected data is an important approach for understanding the structural behavior and for the advancement of future code provisions. Finite element modeling was developed to estimate the strain and stress magnitudes, which were compared with the field monitoring data. The fatigue life of the truss structures was also estimated based on AASHTO specifications and the numerical modeling. The main conclusion of the study is that thermally induced fatigue damage of the truss structures supporting DMS cabinets is likely a significant contributing cause of the cracks observed to develop in such structures. Other probable causes of fatigue damage not investigated in this study are the cyclic oscillations of the total wind load associated with vortex shedding behind the DMS cabinet under high wind conditions, as well as fabrication tolerances and the stresses induced by fitting tube-to-tube connections.
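A fatigue-life estimate of the kind described above can be sketched by combining the AASHTO S-N relation N = A / S^3 with Miner's linear damage rule. The detail-category constant used below (a commonly cited Category E' value) and the daily stress ranges are illustrative placeholders, not values from the study.

```python
# Fatigue life under daily thermal stress cycles, combining the AASHTO
# S-N relation N = A / S^3 with Miner's linear damage rule.
# A and the stress ranges below are illustrative placeholders.

A = 3.9e8                # detail-category constant, ksi^3 (Category E' value)

def cycles_to_failure(stress_range_ksi):
    return A / stress_range_ksi ** 3

def miner_damage(stress_ranges_ksi):
    """Miner's rule: per-cycle damage fractions add linearly; failure at 1.0."""
    return sum(1.0 / cycles_to_failure(s) for s in stress_ranges_ksi)

# Hypothetical daily thermal stress ranges near a weld over one year,
# assuming one dominant diurnal cycle per day (ksi):
daily_ranges = [10.0] * 300 + [20.0] * 65
damage_per_year = miner_damage(daily_ranges)
print(f"damage per year: {damage_per_year:.2e}")
print(f"estimated fatigue life: {1.0 / damage_per_year:.0f} years")
```

Because damage grows with the cube of the stress range, the 65 hypothetical high-range days dominate the total even though they are far less frequent.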
Abstract:
The aim of this article is to estimate the impact of various factors related to role conflict theory and preference theory on the reduction of women's labour force participation after their transition to parenthood. Objective and subjective dimensions of women's labour force participation are assessed. The empirical test is based on a survey of couples with children in Switzerland. Results show that, compared to structural factors associated with role conflict reduction, preferences have little impact on mothers' labour force participation, but explain a good deal of their frustration if the actual situation does not correspond to their wishes. Structural factors, such as occupation, economic resources, childcare, and an urban environment, support mothers' labour force participation, whereas active networks and a home-centred lifestyle preference help them to cope with frustrations.
Abstract:
Cell motility is an essential process that depends on a coherent, cross-linked actin cytoskeleton that physically coordinates the actions of numerous structural and signaling molecules. The actin cross-linking protein, filamin (Fln), has been implicated in the support of three-dimensional cortical actin networks capable of both maintaining cellular integrity and withstanding large forces. Although numerous studies have examined cells lacking one of the multiple Fln isoforms, compensatory mechanisms can mask novel phenotypes only observable by further Fln depletion. Indeed, shRNA-mediated knockdown of FlnA in FlnB-/- mouse embryonic fibroblasts (MEFs) causes a novel endoplasmic spreading deficiency as detected by endoplasmic reticulum markers. Microtubule (MT) extension rates are also decreased, but not because of peripheral actin flow, as this is also decreased in the Fln-depleted system. Additionally, Fln-depleted MEFs exhibit decreased adhesion stability, which appears as increased ruffling of the cell edge, reduced adhesion size, transient traction forces, and decreased stress fibers. FlnA-/- MEFs, but not FlnB-/- MEFs, also show a moderate defect in endoplasm spreading, characterized by initial extension followed by abrupt retractions and stress fiber fracture. FlnA localizes to actin linkages surrounding the endoplasm, adhesions, and stress fibers. Thus we suggest that Flns have a major role in the maintenance of actin-based mechanical linkages that enable endoplasmic spreading and MT extension as well as sustained traction forces and mature focal adhesions.
Abstract:
Anticoagulants are a mainstay of cardiovascular therapy, and parenteral anticoagulants have widespread use in cardiology, especially in acute situations. Parenteral anticoagulants include unfractionated heparin, low-molecular-weight heparins, the synthetic pentasaccharides fondaparinux, idraparinux and idrabiotaparinux, and parenteral direct thrombin inhibitors. The several shortcomings of unfractionated heparin and of low-molecular-weight heparins have prompted the development of the other newer agents. Here we review the mechanisms of action, pharmacological properties and side effects of parenteral anticoagulants used in the management of coronary heart disease treated with or without percutaneous coronary interventions, cardioversion for atrial fibrillation, and prosthetic heart valves and valve repair. Using an evidence-based approach, we describe the results of completed clinical trials, highlight ongoing research with currently available agents, and recommend therapeutic options for specific heart diseases.
Abstract:
Microsatellite loci mutate at an extremely high rate and are generally thought to evolve through a stepwise mutation model. Several differentiation statistics taking into account the particular mutation scheme of microsatellites have been proposed. The most commonly used is R_ST, which is independent of the mutation rate under a generalized stepwise mutation model. F_ST and R_ST are commonly reported in the literature, but often differ widely. Here we compare their statistical performance using individual-based simulations of a finite island model. The simulations were run under different levels of gene flow, mutation rates, and population numbers and sizes. In addition to the per-locus statistical properties, we compare two ways of combining R_ST over loci. Our simulations show that even under a strict stepwise mutation model, no statistic is best overall. All estimators suffer to different extents from large bias and variance. While R_ST better reflects population differentiation in populations characterized by very low gene exchange, F_ST gives better estimates in cases of high gene flow. The number of loci sampled (12, 24, or 96) has only a minor effect on the relative performance of the estimators under study. For all estimators there is a striking effect of the number of samples, with the differentiation estimates showing very odd distributions for two samples.
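For a concrete sense of what these statistics measure, here is a minimal textbook sketch of Wright's F_ST for a single biallelic locus (R_ST is built analogously, with allele-size variances replacing heterozygosities). This is not one of the specific estimators compared in the study, and no sample-size correction is applied.

```python
import numpy as np

def fst(freqs):
    """Wright's F_ST for one biallelic locus.
    freqs: frequency of one allele in each subpopulation
    (equal deme sizes assumed; no sample-size correction)."""
    p = np.asarray(freqs, dtype=float)
    h_s = np.mean(2 * p * (1 - p))     # mean within-deme heterozygosity
    p_bar = p.mean()
    h_t = 2 * p_bar * (1 - p_bar)      # total expected heterozygosity
    return (h_t - h_s) / h_t

print(fst([0.2, 0.8]))   # strongly differentiated demes
print(fst([0.5, 0.5]))   # identical demes -> 0
```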
Abstract:
Ecological parameters vary in space, and the resulting heterogeneity of selective forces can drive adaptive population divergence. Clinal variation represents a classical model to study the interplay of gene flow and selection in the dynamics of this local adaptation process. Although geographic variation in phenotypic traits in discrete populations could be a remnant of past adaptation, the maintenance of adaptive clinal variation requires recurrent selection. Clinal variation in genetically determined traits is generally attributed to adaptation of different genotypes to local conditions along an environmental gradient, although it can also arise from neutral processes. Here, we investigated whether selection accounts for the strong clinal variation observed in a highly heritable pheomelanin-based color trait in the European barn owl by comparing the spatial differentiation of color and of neutral genes among populations. The barn owl's coloration varies continuously from white in southwestern Europe to reddish-brown in northeastern Europe. A very low differentiation at neutral genetic markers suggests that substantial gene flow occurs among populations. The persistence of pronounced color differentiation despite this strong gene flow is consistent with the hypothesis that selection is the primary force maintaining color variation among European populations. Therefore, the color cline is most likely the result of local adaptation.
Abstract:
This paper introduces a mixture model based on the beta distribution, without pre-established means and variances, to analyze a large set of Beauty-Contest data obtained from diverse groups of experiments (Bosch-Domenech et al. 2002). This model gives a better fit of the experimental data, and more precision to the hypothesis that a large proportion of individuals follow a common pattern of reasoning, described as iterated best reply (degenerate), than mixture models based on the normal distribution. The analysis shows that the means of the distributions across the groups of experiments are quite stable, while the proportions of choices at different levels of reasoning vary across groups.
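A beta-mixture fit of this kind can be sketched with a simple EM loop. The M-step below uses weighted method-of-moments updates for the beta shapes, which is a simplification that may differ from the paper's actual estimation procedure, and the data are synthetic.

```python
import numpy as np
from scipy.stats import beta

def fit_beta_mixture(x, k=2, iters=200):
    """EM for a k-component beta mixture on data in (0, 1).
    M-step: weighted method-of-moments for the beta shapes."""
    # Initialise each component from one slice of the sorted data,
    # so the components start out separated.
    order = np.argsort(x)
    a, b = np.empty(k), np.empty(k)
    for j, idx in enumerate(np.array_split(order, k)):
        m, v = x[idx].mean(), x[idx].var()
        common = m * (1 - m) / v - 1
        a[j], b[j] = m * common, (1 - m) * common
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = np.array([w[j] * beta.pdf(x, a[j], b[j]) for j in range(k)])
        resp = dens / dens.sum(axis=0)
        # M-step: mixing weights, then weighted moments -> beta shapes
        w = resp.mean(axis=1)
        for j in range(k):
            m = np.average(x, weights=resp[j])
            v = np.average((x - m) ** 2, weights=resp[j])
            common = m * (1 - m) / v - 1
            a[j], b[j] = m * common, (1 - m) * common
    return w, a, b

# Synthetic data with two clusters of choices (means ~0.33 and ~0.10):
rng = np.random.default_rng(1)
x = np.concatenate([rng.beta(30, 60, 400), rng.beta(5, 45, 200)])
w, a, b = fit_beta_mixture(x)
print("weights:", np.round(w, 2))
print("component means:", np.round(a / (a + b), 2))
```

Because a weighted variance of data in (0, 1) is always below m(1 - m), the moment updates keep both shape parameters positive.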
Abstract:
EXECUTIVE SUMMARY: Evaluating the Information Security posture of an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective, because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time, the overall security level is generally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One, Information Security Evaluation Issues, consists of four chapters. Chapter 1 is an introduction to the purpose of this research and the model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" and identify the principal elements to be addressed in this direction. We then introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The concepts inherent in the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots of trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. 
Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearly situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance-related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. The operation of the model is then discussed: assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two, Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains, consists of four chapters. This is the section where our evaluation model is put into a well-defined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. Each Information Security dimension is discussed in a separate chapter. For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: the identification of the key elements within the dimension; the identification of the Focus Areas for each dimension, consisting of the security issues identified for that dimension; and the identification of the Specific Factors for each dimension, consisting of the security measures or controls addressing the security issues identified. The second phase concerns the evaluation of each Information Security dimension by: implementing the evaluation model, based on the elements identified in the first phase, and identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; and proposing a maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by any organization in order to define its own security requirements. Part Three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting Resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise, which can be combined in order to define a reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. 
The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to better govern their Information Security. RÉSUMÉ: General context of the thesis. The evaluation of security in general, and of Information Security in particular, has become for organizations not only a crucial task but also an increasingly complex one. At present, this evaluation is based mainly on methodologies, best practices, norms or standards that consider separately the different aspects that make up Information Security. We believe that this way of evaluating security is inefficient, because it does not take into account the interaction between the different dimensions and components of security, even though it has long been accepted that the overall security level of an organization is always that of the weakest link in the security chain. We have identified the need for a global, integrated, systemic and multidimensional approach to the evaluation of Information Security. Indeed, and this is the starting point of our thesis, we demonstrate that only a global consideration of security can meet the requirements of optimal security as well as the specific protection needs of an organization. Our thesis therefore proposes a new paradigm of security evaluation in order to satisfy the effectiveness and efficiency needs of a given organization. 
We then propose a model that aims to evaluate, in a holistic manner, all the dimensions of security, in order to minimize the probability that a potential threat could exploit vulnerabilities and cause direct or indirect damage. This model is based on a formalized structure that takes into account all the elements of a security system or program. We thus propose a methodological evaluation framework that considers Information Security from a global perspective. Structure of the thesis and topics addressed: our document is structured in three parts. The first, entitled "The problem of Information Security evaluation", consists of four chapters. Chapter 1 introduces the object of the research and the basic concepts of the proposed evaluation model. The traditional way of evaluating security is critically analyzed in order to identify the principal, invariant elements to be taken into account in our holistic approach. The basic elements of our evaluation model and its expected operation are then presented in order to outline the expected results of this model. Chapter 2 focuses on the definition of the notion of Information Security. It is not a redefinition of the notion of security, but a putting into perspective of the dimensions, criteria and indicators to be used as a reference base, in order to determine the object of the evaluation that will be used throughout our work. The concepts inherent in the holistic character of security, as well as the constituent elements of a security baseline, are defined accordingly. This allows us to identify what we have called the "roots of trust". 
Chapter 3 presents and analyzes the difference and the relations between the processes of Risk Management and Security Management, in order to identify the constituent elements of the protection framework to be included in our evaluation model. Chapter 4 is devoted to the presentation of our evaluation model, the Information Security Assurance Assessment Model (ISAAM), and the way in which it meets the evaluation requirements presented earlier. In this chapter the underlying concepts relating to the notions of assurance and trust are analyzed. Based on these two concepts, the structure of the evaluation model is developed to obtain a platform that offers a certain level of guarantee, relying on three evaluation attributes, namely: "assurance structure", "process quality", and "achievement of requirements and objectives". The issues related to each of these evaluation attributes are analyzed on the basis of the state of the art in research and the literature, of the various existing methods, and of the most widespread norms and standards in the security field. On this basis, three different evaluation levels are constructed, namely the assurance level, the quality level and the maturity level, which form the basis for evaluating the overall security state of an organization. The second part, "Application of the Information Security Assurance Assessment Model by security domain", also consists of four chapters. In this part, the evaluation model already constructed and analyzed is placed in a specific context according to the four predefined security dimensions: the Organizational dimension, the Functional dimension, the Human dimension, and the Legal dimension. 
Each of these dimensions and its specific evaluation is the subject of a separate chapter. For each dimension, a two-phase evaluation is constructed as follows. The first phase concerns the identification of the elements that constitute the basis of the evaluation: the identification of the key elements of the evaluation; the identification of the "Focus Areas" of each dimension, which represent the issues found within that dimension; and the identification of the "Specific Factors" for each Focus Area, which represent the security and control measures that help to resolve, or reduce the impact of, the risks. The second phase concerns the evaluation of each of the dimensions presented above. It consists, on the one hand, of the application of the general evaluation model to the dimension concerned, by relying on the elements specified in the first phase and by identifying the specific security tasks, processes and procedures that should have been carried out to reach the desired level of protection. On the other hand, the evaluation of each dimension is completed by the proposal of a maturity model specific to that dimension, to be considered as a reference base for the overall security level. For each dimension we propose a generic maturity model that can be used by any organization to specify its own security requirements. This constitutes an innovation in the field of evaluation, which we justify for each dimension and whose added value we systematically highlight. The third part of our document concerns the overall validation of our proposal and contains, by way of conclusion, a critical perspective on our work and final remarks. This last part is completed by a bibliography and annexes. 
Our security evaluation model integrates and builds on numerous sources of expertise, such as best practices, norms, standards, methods and the expertise of scientific research in the field. Our constructive proposal responds to a real, still unresolved problem faced by all organizations, regardless of size and profile. It would allow them to specify their particular requirements in terms of the security level to be met, and to instantiate an evaluation process specific to their needs, so that they can make sure that their Information Security is managed appropriately, thus providing a certain level of confidence in the degree of protection achieved. We have integrated into our model the best of the know-how, experience and expertise currently available at the international level, with the aim of providing an evaluation model that is simple, generic and applicable to a large number of public or private organizations. The added value of our evaluation model lies precisely in the fact that it is sufficiently generic and easy to implement while answering the concrete needs of organizations. Our proposal thus constitutes a reliable, efficient and dynamic evaluation tool derived from a coherent evaluation approach. Consequently, our evaluation system can be implemented internally by the organization itself, without recourse to additional resources, and likewise gives it the possibility of better governing its Information Security.
Abstract:
Motivation: The comparative analysis of gene gain and loss rates is critical for understanding the role of natural selection and adaptation in shaping gene family sizes. Studying complete genome data from closely related species allows accurate estimation of gene family turnover rates. Current methods and software tools, however, are not well designed for dealing with certain kinds of functional elements, such as microRNAs or transcription factor binding sites. Results: Here, we describe BadiRate, a new software tool to estimate family turnover rates, as well as the number of elements at internal phylogenetic nodes, by likelihood-based methods and parsimony. It implements two stochastic population models, which provide the appropriate statistical framework for testing hypotheses, such as lineage-specific gene family expansions or contractions. We have assessed the accuracy of BadiRate by computer simulations, and have also illustrated its functionality by analyzing a representative empirical dataset.
Abstract:
Azole resistance in Candida albicans can be mediated by the upregulation of the ATP binding cassette transporter genes CDR1 and CDR2. Both genes are regulated by a cis-acting element called the drug-responsive element (DRE), with the consensus sequence 5'-CGGAWATCGGATATTTTTTT-3', and the transcription factor Tac1p. In order to analyze in detail the DRE sequence necessary for the regulation of CDR1 and CDR2 and properties of TAC1 alleles, a one-hybrid system was designed. This system is based on a P((CDR2))-HIS3 reporter system in which complementation of histidine auxotrophy can be monitored by activation of the reporter system by CDR2-inducing drugs such as estradiol. Our results show that most of the modifications within the DRE, but especially at the level of CGG triplets, strongly reduce CDR2 expression. The CDR2 DRE was replaced by putative DREs deduced from promoters of coregulated genes (CDR1, RTA3, and IFU5). Surprisingly, even if Tac1p was able to bind these putative DREs, as shown by chromatin immunoprecipitation, those from RTA3 and IFU5 did not functionally replace the CDR2 DRE. The one-hybrid system was also used for the identification of gain-of-function (GOF) mutations either in TAC1 alleles from clinical C. albicans isolates or inserted in TAC1 wild-type alleles by random mutagenesis. In all, 17 different GOF mutations were identified at 13 distinct positions. Five of them (G980E, N972D, A736V, T225A, and N977D) have already been described in clinical isolates, and four others (G980W, A736T, N972S, and N972I) occurred at already-described positions, thus suggesting that GOF mutations can occur in a limited number of positions in Tac1p. In conclusion, the one-hybrid system developed here is rapid and powerful and can be used for characterization of cis- and trans-acting elements in C. albicans.
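Scanning a promoter for DRE-like sites against the stated consensus (IUPAC code W = A or T) can be sketched as follows; the promoter fragment used here is hypothetical, not a real C. albicans sequence.

```python
import re

# The stated DRE consensus; IUPAC W = A or T, the rest is literal.
DRE_CONSENSUS = "CGGAWATCGGATATTTTTTT"
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "W": "[AT]"}
pattern = re.compile("".join(IUPAC[c] for c in DRE_CONSENSUS))

def find_dre(promoter_seq):
    """Return (start, matched sequence) for each DRE-like site."""
    return [(m.start(), m.group()) for m in pattern.finditer(promoter_seq.upper())]

# Hypothetical promoter fragment with one embedded DRE-like site:
seq = "ttgacaCGGATATCGGATATTTTTTTacgtgc"
print(find_dre(seq))   # -> [(6, 'CGGATATCGGATATTTTTTT')]
```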
On the evolution of harming and recognition in finite panmictic and infinite structured populations.
Abstract:
Natural selection may favor two very different types of social behaviors that have costs in vital rates (fecundity and/or survival) to the actor: helping behaviors, which increase the vital rates of recipients, and harming behaviors, which reduce the vital rates of recipients. Although social evolutionary theory has mainly dealt with helping behaviors, competition for limited resources creates ecological conditions in which an actor may benefit from expressing behaviors that reduce the vital rates of neighbors. This may occur if the reduction in vital rates decreases the intensity of competition experienced by the actor or that experienced by its offspring. Here, we explore the joint evolution of neutral recognition markers and marker-based costly conditional harming whereby actors express harming, conditional on actor and recipient bearing different conspicuous markers. We do so for two complementary demographic scenarios: finite panmictic and infinite structured populations. We find that marker-based conditional harming can evolve under a large range of recombination rates and group sizes under both finite panmictic and infinite structured populations. A direct comparison with results for the evolution of marker-based conditional helping reveals that, if everything else is equal, marker-based conditional harming is often more likely to evolve than marker-based conditional helping.
Abstract:
The estimation of muscle forces in musculoskeletal shoulder models is still controversial. Two different methods are widely used to solve the indeterminacy of the system: electromyography (EMG)-based methods and stress-based methods. The goal of this work was to evaluate the influence of these two methods on the prediction of muscle forces, glenohumeral load and joint stability after total shoulder arthroplasty. An EMG-based and a stress-based method were implemented into the same musculoskeletal shoulder model. The model replicated the glenohumeral joint after total shoulder arthroplasty. It contained the scapula, the humerus, the joint prosthesis, the rotator cuff muscles (supraspinatus, subscapularis and infraspinatus) and the middle, anterior and posterior deltoid muscles. A movement of abduction was simulated in the plane of the scapula. The EMG-based method replicated the muscular activity of experimentally measured EMG. The stress-based method minimised a cost function based on muscle stresses. We compared muscle forces, joint reaction force, articular contact pressure and translation of the humeral head. The stress-based method predicted a lower force of the rotator cuff muscles. This was partly counterbalanced by a higher force of the middle part of the deltoid muscle. As a consequence, the stress-based method predicted a lower joint load (reduced by 16%) and a higher superior-inferior translation of the humeral head (increased by 1.2 mm). The EMG-based method has the advantage of replicating the observed co-contraction of the stabilising muscles of the rotator cuff, but is limited to available EMG measurements. The stress-based method thus has the advantage of flexibility, but may overestimate glenohumeral subluxation.
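The stress-based approach can be illustrated by a toy static-optimization problem: minimize the sum of squared muscle stresses subject to a joint-moment equilibrium constraint. The moment arms, PCSAs and target moment below are illustrative placeholders, not parameters of the paper's shoulder model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stress-based load sharing for shoulder abduction: find muscle
# forces balancing a required joint moment while minimising the sum
# of squared muscle stresses (force / PCSA).
muscles    = ["supraspinatus", "infraspinatus", "subscapularis", "mid-deltoid"]
moment_arm = np.array([0.020, 0.010, 0.010, 0.030])  # abduction moment arms, m
pcsa       = np.array([6.0, 10.0, 12.0, 15.0])       # cross-sections, cm^2
target_moment = 20.0                                 # required moment, N*m

cost = lambda f: np.sum((f / pcsa) ** 2)             # sum of squared stresses
cons = {"type": "eq", "fun": lambda f: moment_arm @ f - target_moment}
res = minimize(cost, x0=np.full(4, 100.0), method="SLSQP",
               bounds=[(0.0, None)] * 4, constraints=cons)

for name, f in zip(muscles, res.x):
    print(f"{name:>14}: {f:7.1f} N")
print("moment produced:", round(float(moment_arm @ res.x), 3), "N*m")
```

With this cost, the optimum loads each muscle in proportion to its moment arm times its PCSA squared, so the large-moment-arm, large-PCSA deltoid takes most of the load, mirroring the tendency of stress-based methods to under-recruit the cuff.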
Abstract:
The inverse scattering problem concerning the determination of the joint time-delay-Doppler-scale reflectivity density characterizing continuous target environments is addressed by recourse to generalized frame theory. A reconstruction formula, involving the echoes of a frame of outgoing signals and its corresponding reciprocal frame, is developed. A "realistic" situation with respect to the transmission of a finite number of signals is further considered. In such a case, our reconstruction formula is shown to yield the orthogonal projection of the reflectivity density onto a subspace generated by the transmitted signals.
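In standard frame-theory notation (assumed here; the paper's own symbols and the form of its reflectivity-density reconstruction may differ), the reconstruction idea can be sketched as:

```latex
% {g_n}: frame of outgoing signals in a Hilbert space H,
% S f = \sum_n \langle f, g_n \rangle g_n : the frame operator,
% \tilde{g}_n = S^{-1} g_n : the reciprocal (dual) frame.
f \;=\; \sum_n \langle f, \tilde{g}_n \rangle \, g_n
  \;=\; \sum_n \langle f, g_n \rangle \, \tilde{g}_n .
% Restricting the sum to a finite set T of transmitted signals
% recovers only the component of f lying in their span:
P_T f \;=\; \sum_{n \in T} \langle f, \tilde{g}_n \rangle \, g_n ,
% i.e. the projection onto span\{ g_n : n \in T \}.
```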