981 results for Robustness Analysis
Abstract:
Manipulator systems are rather complex and highly nonlinear, which makes their analysis and control difficult. Classic system theory is well known, but it is inadequate in the presence of strong nonlinear dynamics. Nonlinear controllers produce good results [1], and work has been done, e.g., relating the manipulator nonlinear dynamics with frequency response [2–5]. Nevertheless, given the complexity of the problem, systematic methods that permit drawing conclusions about stability, imperfect modelling effects, compensation requirements, etc. are still lacking. In section 2 we start by analysing the variation of the poles and zeros of the descriptive transfer functions of a robot manipulator in order to motivate the development of more robust (and computationally efficient) control algorithms. Based on this analysis, a new multirate controller, an improvement of the well-known “computed torque controller” [6], is presented in section 3. Some research in this area was done by Neuman [7,8], showing that better robustness is possible if the basic controller structure is modified. The present study stems from those ideas and attempts to give a systematic treatment, resulting in easy-to-use standard engineering tools. Finally, conclusions are presented in section 4.
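For context, the “computed torque controller” [6] that the proposed multirate scheme improves on has a standard textbook form; the sketch below is a minimal illustration of that baseline law, with the dynamics callables, gain matrices, and single-joint usage all placeholders rather than anything from the paper:

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
    """Baseline computed-torque (inverse-dynamics) law:
    tau = M(q)(ddq_des + Kd*de + Kp*e) + C(q, dq) dq + g(q).
    With a perfect model, the closed loop reduces to the linear error
    dynamics  dde + Kd*de + Kp*e = 0."""
    e = q_des - q                        # joint position error
    de = dq_des - dq                     # joint velocity error
    v = ddq_des + Kd @ de + Kp @ e       # stabilising outer loop
    return M(q) @ v + C(q, dq) @ dq + g(q)

# Toy single-joint placeholders (illustration only):
M = lambda q: np.eye(1)
C = lambda q, dq: np.zeros((1, 1))
g = lambda q: np.zeros(1)
tau = computed_torque(np.zeros(1), np.zeros(1), np.ones(1), np.zeros(1),
                      np.zeros(1), M, C, g, Kp=10 * np.eye(1), Kd=2 * np.eye(1))
```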
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
The paper presented herein proposes a reliability-based framework for quantifying structural robustness considering the occurrence of a major earthquake (mainshock) and subsequent cascading hazard events, such as aftershocks triggered by the mainshock. These events can significantly increase the probability of failure of buildings, especially structures that are damaged during the mainshock. The application of the proposed framework is exemplified through three numerical case studies. The case studies correspond to three SAC steel moment-frame buildings of 3, 9, and 20 stories, which were designed to pre-Northridge codes and standards. Two-dimensional nonlinear finite element models of the buildings are developed using the Open System for Earthquake Engineering Simulation framework (OpenSees), using a finite-length plastic hinge beam model and a bilinear constitutive law with deterioration, and are subjected to multiple mainshock-aftershock seismic sequences. For the three buildings analyzed herein, it is shown that the structural reliability under a single seismic event can be significantly different from that under a sequence of seismic events. The reliability-based robustness indicator used shows that structural robustness is influenced by the extent to which a structure can distribute damage.
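The abstract does not define its reliability-based robustness indicator; one indicator of this general type from the literature (Frangopol and Curley, 1987) compares reliability indices of the intact and damaged structure. The sketch below uses that indicator with purely illustrative failure probabilities, not the SAC case-study results:

```python
from scipy.stats import norm

def reliability_index(pf):
    """beta = -Phi^{-1}(pf): maps a failure probability to a reliability index."""
    return -norm.ppf(pf)

def robustness_index(pf_intact, pf_damaged):
    """Redundancy-style indicator beta_i / (beta_i - beta_d) from the
    literature; larger values mean damage erodes reliability less."""
    b_i = reliability_index(pf_intact)
    b_d = reliability_index(pf_damaged)
    return b_i / (b_i - b_d)

# Illustrative numbers only:
print(robustness_index(pf_intact=1e-4, pf_damaged=1e-3))  # ~5.9
```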
Abstract:
Dissertation for obtaining the degree of Doctor in Civil Engineering
Abstract:
Cyanobacteria are photoautotrophic microorganisms with great potential for the biotechnological industry due to their low nutrient requirements, photosynthetic capacities and metabolic plasticity. In biotechnology, the energy sector is one of the main targets for their utilization, especially to produce the so-called third-generation biofuels, which are regarded as one of the best replacements for petroleum-based fuels. Although several issues could be solved this way, others arise from the use of cyanobacteria, namely the need for large amounts of freshwater and contamination/predation by other microorganisms, both of which affect cultivation efficiency. Cultivating cyanobacteria in seawater could address the freshwater demand, since seawater has a very stable and rich chemical composition. Among cyanobacteria, the model microorganism Synechocystis sp. PCC 6803 is one of the most studied, with its genome fully sequenced and genomic, transcriptomic and proteomic data available to better predict its phenotypic behaviours/characteristics. Although suitable for genetic engineering and implementation as a microbial cell factory, Synechocystis' growth rate is negatively affected by increasing salinity levels. Therefore, it is important to improve its tolerance to salt stress. To achieve this, several strategies involving the constitutive overexpression of the native genes encoding the proteins involved in the production of the compatible solute glucosylglycerol were implemented, following synthetic biology principles. A preliminary transcription analysis of selected mutants revealed that the assembled synthetic devices are functional at the transcriptional level. However, under different salinities the mutants did not show improved robustness to salinity in terms of growth compared with the wild type. Nevertheless, some mutants carrying synthetic devices appear to have a better physiological response under seawater's NaCl concentration than in 0% (w/v) NaCl.
Abstract:
Master's dissertation in Bioinformatics
Abstract:
The weak selection approximation of population genetics has made possible the analysis of social evolution under a considerable variety of biological scenarios. Despite its extensive usage, the accuracy of weak selection in predicting the emergence of altruism under limited dispersal when selection intensity increases remains unclear. Here, we derive the condition for the spread of an altruistic mutant in the infinite island model of dispersal under a Moran reproductive process and arbitrary strength of selection. The simplicity of the model allows us to compare weak and strong selection regimes analytically. Our results demonstrate that the weak selection approximation is robust to moderate increases in selection intensity and therefore provides a good approximation for understanding the invasion of altruism in spatially structured populations. In particular, we find that the weak selection approximation is excellent even if selection is very strong, when either migration is much stronger than selection or patches are large. Importantly, we emphasize that the weak selection approximation provides the ideal condition for the invasion of altruism, and increasing selection intensity will impede the emergence of altruism. We argue that this should also hold for more complicated life cycles and for culturally transmitted altruism. Using the weak selection approximation is therefore unlikely to miss any demographic scenario that leads to the evolution of altruism under limited dispersal.
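As a generic reminder of the logic being stress-tested here (the paper derives its own exact condition for the island model), the weak selection approximation linearises invasion fitness in the selection intensity $\delta$; the display below is illustrative notation, not the paper's result:

```latex
% Generic weak-selection logic (illustrative; \delta is the selection intensity):
W(\delta) \;=\; \underbrace{W(0)}_{=\,1\ \text{(neutral)}}
\;+\; \delta \left.\frac{\partial W}{\partial \delta}\right|_{\delta = 0}
\;+\; O(\delta^{2}),
\qquad
\text{invasion} \iff \left.\frac{\partial W}{\partial \delta}\right|_{\delta = 0} > 0 .
```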
Abstract:
Colorectal cancer (CRC) is one of the most prevalent cancers in developed countries. However, the genetic factors influencing its appearance remain far from fully characterized. Recently, a G>A functional transition mapping to the 3' untranslated region of the CXCL12 gene (rs1801157) was found to be under-represented among rectal cancer patients when compared to colon cancer patients from a Swedish series. Here we present the results of an independent analysis of CXCL12 rs1801157 in a larger CRC series of Spanish origin, in order to assess the robustness of this association within a different European population. No significant difference was observed between controls and colon or rectal cancer patients. We were also unable to find a correlation between rs1801157 and different prognostic markers such as metastasis development or disease-free survival time. The epidemiologic data implicating CXCL12 rs1801157 in colorectal cancer risk are discussed.
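For readers unfamiliar with this type of analysis, the case-control comparison reported here is typically a contingency-table test on genotype counts; a minimal sketch with entirely made-up counts (not the Spanish series data) is:

```python
from scipy.stats import chi2_contingency

# Hypothetical rs1801157 genotype counts (GG, GA, AA); illustrative only.
controls = [310, 150, 20]
rectal_cases = [160, 70, 10]

chi2, p, dof, _ = chi2_contingency([controls, rectal_cases])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p > 0.05 mirrors a null result
```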
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and a lack of robustness in small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and general types of statistics. An example involving a test of the equality of means under covariance restrictions illustrates theoretical aspects of the paper.
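For orientation, the standard covariance-structure expressions behind this trade-off are the textbook forms below (e.g. Browne, 1984); the paper's specific $\Omega$ may differ, and $D_p^{+}$ denotes the Moore-Penrose inverse of the duplication matrix:

```latex
% Textbook forms, with d_i = \mathrm{vech}(x_i x_i'):
\hat{\Gamma} \;=\; \frac{1}{n}\sum_{i=1}^{n} (d_i - \bar{d})(d_i - \bar{d})'
\quad \text{(fourth-order moments)},
\qquad
\Omega_{\mathrm{NT}} \;=\; 2\, D_p^{+} (\Sigma \otimes \Sigma) (D_p^{+})'
\quad \text{(second-order moments only)} .
```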
Abstract:
Acute brain slices are slices of brain tissue that are kept viable in vitro for further recordings and analyses. This tool is of major importance in neurobiology and allows the study of brain cells such as microglia, astrocytes and neurons, and their inter/intracellular communication via ion channels or transporters. In combination with light/fluorescence microscopy, acute brain slices enable the ex vivo analysis of specific cells or groups of cells inside the slice, e.g. astrocytes. To bridge ex vivo knowledge of a cell with its ultrastructure, we developed a correlative microscopy approach for acute brain slices. The workflow begins with sampling of the tissue and precise trimming of a region of interest, which contains GFP-tagged astrocytes that can be visualised by fluorescence microscopy of ultrathin sections. The astrocytes and their surroundings are then analysed by high-resolution scanning transmission electron microscopy (STEM). An important aspect of this workflow is the modification of a commercial cryo-ultramicrotome to observe the fluorescent GFP signal during the trimming process, which ensures that sections contain at least one GFP-positive astrocyte. After cryo-sectioning, a map of the GFP-expressing astrocytes is established and transferred to correlation software installed on a focused ion beam scanning electron microscope equipped with a STEM detector. Next, the areas displaying fluorescence are selected for high-resolution STEM imaging. An overview area (e.g. a whole mesh of the grid) is imaged with an automated tiling and stitching process. In the final stitched image, the local organisation of the brain tissue can be surveyed, or areas of interest can be magnified to observe fine details, e.g. vesicles or gold labels on specific proteins. The robustness of this workflow is contingent on the quality of sample preparation, which is based on Tokuyasu's protocol. This method results in a reasonable compromise between preservation of morphology and maintenance of antigenicity. Finally, an important feature of this approach is that the fluorescence of the GFP signal is preserved throughout the entire preparation process until the last step before electron microscopy.
Abstract:
A major issue in the application of waveform inversion methods to crosshole georadar data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a time-domain waveform inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little-to-no trade-off between the wavelet estimation and the tomographic imaging procedures.
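The abstract leaves the deconvolution details to the paper; as a generic illustration of source-wavelet estimation by deconvolution, the sketch below shows one common regularised (water-level) frequency-domain form. The function name, the `eps` water level, and the single-trace layout are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def estimate_wavelet(d_obs, g_model, eps=1e-2):
    """Water-level frequency-domain deconvolution (generic sketch):
    W(f) = D(f) conj(G(f)) / (|G(f)|^2 + eps * max|G|^2),
    where d_obs is an observed trace and g_model the synthetic response of
    the current subsurface model to an impulsive source. In an iterative
    scheme, the forward problem is re-solved with the updated wavelet and
    the division repeated until the estimate stabilises."""
    D = np.fft.rfft(d_obs)
    G = np.fft.rfft(g_model)
    denom = np.abs(G) ** 2 + eps * np.max(np.abs(G) ** 2)
    return np.fft.irfft(D * np.conj(G) / denom, n=len(d_obs))
```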
Abstract:
Introduction: With the setting up of the new Athlete's Biological Passport antidoping programme, novel guidelines have been introduced to guarantee results beyond reproach. In this context, we investigated the effect of storage time on the variables commonly measured for the haematological passport. We also wanted to assess, for these variables, the within- and between-analyzer variation. Methods: Blood samples were obtained from top-level male professional cyclists (27 samples for the first part of the study and 102 for the second part) taking part in major stage races. After collection, they were transported under refrigerated conditions (2 °C < T < 12 °C), delivered to the antidoping laboratory, analysed and then stored at approximately 4 °C so that analyses could be conducted at different time points up to 72 h after delivery. A mixed-model procedure was used to determine the stability of the different variables. Results: As expected, haemoglobin concentration was not affected by storage and remained stable for at least 72 h. Under the conditions of our investigation, the reticulocyte percentage showed much better stability than previously published data (> 48 h), and the technical comparison of the haematology analyzers demonstrated excellent results. Conclusion: Our data clearly demonstrate that, as long as the World Anti-Doping Agency's guidelines are followed rigorously, all blood results reach the quality level required in the antidoping context.
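The abstract names a mixed-model procedure without giving its form; a minimal sketch of how such a stability analysis is often set up (here with statsmodels, and with entirely hypothetical data and column names) might be:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout (not the study's data): repeated reticulocyte
# percentages per blood sample across storage times.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "sample_id": np.repeat(np.arange(20), 4),
    "storage_h": np.tile([0, 24, 48, 72], 20),
})
df["retic_pct"] = 1.0 + rng.normal(0, 0.05, len(df))  # a stable variable

# Random intercept per sample; a storage_h slope near zero supports
# stability of the variable over the 72 h window.
fit = smf.mixedlm("retic_pct ~ storage_h", df, groups=df["sample_id"]).fit()
print(fit.summary())
```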
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted to the modeling of high-threshold categorization and to automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.

In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
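Among the exploratory tools listed above, variography is the easiest to make concrete; below is a minimal sketch of the classical empirical semivariogram estimator, with the array layout and lag-bin handling assumed for illustration (this is not code from the thesis):

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """Classical estimator: gamma(h) = (1 / (2 |N(h)|)) * sum over pairs in
    N(h) of (z_i - z_j)^2, computed per lag bin. Assumes coords is an
    (n, 2) array of locations and values an (n,) array of measurements."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    dists, sqdiff = dists[iu], sqdiff[iu]
    gammas = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        gammas.append(sqdiff[mask].mean() / 2.0 if mask.any() else np.nan)
    return np.array(gammas)
```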
Abstract:
Gene set enrichment (GSE) analysis is a popular framework for condensing information from gene expression profiles into a pathway or signature summary. The strengths of this approach over single-gene analysis include noise and dimension reduction, as well as greater biological interpretability. As molecular profiling experiments move beyond simple case-control studies, robust and flexible GSE methodologies are needed that can model pathway activity within highly heterogeneous data sets. To address this challenge, we introduce Gene Set Variation Analysis (GSVA), a GSE method that estimates variation of pathway activity over a sample population in an unsupervised manner. We demonstrate the robustness of GSVA in a comparison with current state-of-the-art sample-wise enrichment methods. Further, we provide examples of its utility in differential pathway activity and survival analysis. Lastly, we show how GSVA works analogously with data from both microarray and RNA-seq experiments. GSVA provides increased power to detect subtle pathway activity changes over a sample population in comparison to corresponding methods. While GSE methods are generally regarded as end points of a bioinformatic analysis, GSVA constitutes a starting point for building pathway-centric models of biology. Moreover, GSVA helps meet the current need for GSE methods that handle RNA-seq data. GSVA is an open-source software package for R which forms part of the Bioconductor project and can be downloaded at http://www.bioconductor.org.
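GSVA itself is distributed as the R/Bioconductor package referenced above; as a rough, language-neutral illustration of what a "sample-wise" enrichment score is, the sketch below computes a much simpler rank-based per-sample summary. It is explicitly not the GSVA statistic, and the function name, matrix layout and gene-set encoding are all assumptions:

```python
import numpy as np

def sample_wise_scores(expr, gene_sets):
    """Toy per-sample 'enrichment': the mean within-sample expression rank of
    each gene set, scaled to [0, 1]. A simplified stand-in illustrating
    sample-wise GSE, *not* the GSVA statistic. expr is a genes x samples
    array; gene_sets maps set names to row-index arrays."""
    ranks = expr.argsort(axis=0).argsort(axis=0) / (expr.shape[0] - 1)
    return {name: ranks[rows, :].mean(axis=0) for name, rows in gene_sets.items()}

# Hypothetical usage: 1000 genes x 12 samples, one 50-gene set.
expr = np.random.default_rng(0).normal(size=(1000, 12))
print(sample_wise_scores(expr, {"pathway_A": np.arange(50)})["pathway_A"])
```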
Abstract:
Within a developing organism, cells require information on where they are in order to differentiate into the correct cell type. Pattern formation is the process by which cells acquire and process positional cues and thus determine their fate. This can be achieved by the production and release of a diffusible signaling molecule, called a morphogen, which forms a concentration gradient: exposure to different morphogen levels leads to the activation of specific signaling pathways. Thus, in response to the morphogen gradient, cells start to express different sets of genes, forming domains characterized by a unique combination of differentially expressed genes. As a result, a pattern of cell fates and specification emerges.

Though morphogens have been known for decades, it is not yet clear how these gradients form and are interpreted in order to yield highly robust patterns of gene expression. During my PhD thesis, I investigated the properties of Bicoid (Bcd) and Decapentaplegic (Dpp), two morphogens involved in the patterning of the anterior-posterior axis of the Drosophila embryo and wing primordium, respectively. In particular, I have been interested in understanding how pattern proportions are maintained across embryos of different sizes or within a growing tissue. This property is commonly referred to as scaling and is essential for yielding functional organs or organisms.

In order to tackle these questions, I analysed fluorescence images showing the pattern of gene expression domains in the early embryo and wing imaginal disc. After characterizing the extent of these domains in a quantitative and systematic manner, I introduced and applied a new scaling measure in order to assess how well proportions are maintained. I found that scaling emerged as a universal property both in early embryos (at least far away from the Bcd source) and in wing imaginal discs (across different developmental stages). Since we were also interested in understanding the mechanisms underlying scaling and how it is transmitted from the morphogen to the target genes down the signaling cascade, I also quantified scaling in mutant flies where this property could be disrupted. While scaling is largely conserved in embryos with altered bcd dosage, my modeling suggests that Bcd trapping by the nuclei as well as pre-steady-state decoding of the morphogen gradient are essential to ensure precise and scaled patterning of the Bcd signaling cascade. In the wing imaginal disc, it appears that as the disc grows, the Dpp response expands and scales with the tissue size. Interestingly, scaling is not perfect at all positions in the field. The scaling of the target gene domains is best where they have a function; Spalt, for example, scales best at the position in the anterior compartment where it helps to form one of the anterior veins of the wing. Analysis of mutants for pentagone, a transcriptional target of Dpp that encodes a secreted feedback regulator of the pathway, indicates that Pentagone plays a key role in scaling the Dpp gradient activity.
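As a generic illustration of what scaling means for a gradient readout (the thesis' own scaling measure is more refined than this), an exponential morphogen profile gives threshold-crossing positions that scale with system size exactly when the decay length does; $C_0$, $C_\theta$, $\lambda$ and $L$ below are illustrative symbols for source level, response threshold, decay length and tissue size:

```latex
% Exponential-gradient illustration (not the thesis' scaling measure):
C(x) \;=\; C_0\, e^{-x/\lambda}
\;\;\Rightarrow\;\;
x^{*} \;=\; \lambda \ln\!\frac{C_0}{C_\theta},
\qquad
\frac{x^{*}}{L}\ \text{is size-invariant} \iff \lambda \propto L .
```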