962 results for advanced analysis
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes in which chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods are needed for selecting the best process alternative as well as the optimal operating conditions. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed-bed processes and the corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, which assumes negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytical solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse.
It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows prediction of the feasible range of operating parameters that lead to the desired product purities. It can be applied to calculate first estimates of optimal operating conditions, to analyze process robustness, and to evaluate different process alternatives at an early stage. The design method is used to analyze the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Owing to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated in case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints.
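The competitive Langmuir isotherm underlying the design equations has a simple closed form, q_i = a_i c_i / (1 + b_1 c_1 + b_2 c_2). A minimal sketch for a binary mixture, with hypothetical isotherm parameters rather than any system from the thesis:

```python
# Competitive Langmuir isotherm for a binary mixture.
# The a_i (Henry constants) and b_i (equilibrium constants) below are
# illustrative values, not taken from the thesis.
def langmuir_loading(c1, c2, a=(2.0, 3.0), b=(0.05, 0.10)):
    """Adsorbed-phase loadings q_i = a_i*c_i / (1 + b_1*c_1 + b_2*c_2)."""
    denom = 1.0 + b[0] * c1 + b[1] * c2
    return a[0] * c1 / denom, a[1] * c2 / denom

# Components compete for adsorption sites: each raises the shared denominator,
# lowering the other's loading relative to the single-component isotherm.
q1, q2 = langmuir_loading(1.0, 1.0)
```

At low concentrations the denominator approaches 1 and the model reduces to the linear (Henry's law) limit q_i ≈ a_i c_i, which is why competitive effects only dominate at high feed loadings.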
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with these theoretical developments, we describe state-of-the-art software that implements the methods and makes them accessible to practicing ecologists.
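The conventional distance sampling engine described in point 4 can be sketched for a half-normal detection function: detectability declines with distance from the transect, the integral of the detection function gives an effective strip half-width, and density follows from the count. The scale parameter, truncation distance, and survey counts below are illustrative values, not output from the Distance software:

```python
import math

# Sketch of conventional line-transect distance sampling with a
# half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)).
# sigma, w, n, and L are made-up illustrative numbers.
def halfnormal_esw(sigma, w):
    """Effective strip half-width: integral of g(x) from 0 to truncation w."""
    return sigma * math.sqrt(math.pi / 2.0) * math.erf(w / (sigma * math.sqrt(2.0)))

def line_transect_density(n, L, sigma, w):
    """Density D = n / (2 * L * ESW): n detections over total transect length L."""
    return n / (2.0 * L * halfnormal_esw(sigma, w))

# 96 detections on 40 length-units of transect, sigma = 25, truncation at 100
# (all distance quantities in the same units as L).
D = line_transect_density(n=96, L=40.0, sigma=25.0, w=100.0)
```

The factor 2 accounts for both sides of the line; note that g(0) = 1 encodes the conventional-distance-sampling assumption that objects on the transect itself are always detected, which the mark-recapture engine relaxes.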
Abstract:
This paper describes the buckling phenomenon of a tubular truss with an unsupported length through a full-scale test and presents a practical computational method for the design of such trusses that allows for the contribution of torsional stiffness against buckling, an effect that has not previously been considered by others. Current practice for the design of a planar truss has largely been based on the linear elastic approach, which cannot account for the contribution of torsional stiffness and tension members to the buckling resistance of a structural system. This over-simplified analytical technique is unable to provide a realistic and economical design for a structure. In this paper the stability theory is applied to the second-order analysis and design of the structural form, with detailed allowance for instability and second-order effects in compliance with design code requirements. Finally, the paper demonstrates the application of the proposed method to the stability design of a commonly adopted truss system used to support glass panels, in which lateral bracing members are highly undesirable for economic and aesthetic reasons.
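As a rough illustration of the second-order effects such an analysis accounts for (not the paper's own method), the classical elastic critical load of a pin-ended member and the resulting amplification of first-order moments can be sketched; E, I, L, and P below are made-up values, not properties of the tested truss:

```python
import math

# Illustrative second-order check for a pin-ended compression member.
# E (Pa), I (m^4), L (m), and P (N) are hypothetical values.
def euler_critical_load(E, I, L):
    """Elastic critical (Euler) buckling load: P_cr = pi^2 * E * I / L^2."""
    return math.pi ** 2 * E * I / L ** 2

def moment_amplification(P, P_cr):
    """First-order moments are amplified by approximately 1 / (1 - P / P_cr)."""
    return 1.0 / (1.0 - P / P_cr)

P_cr = euler_critical_load(E=205e9, I=1.2e-6, L=3.0)  # roughly 270 kN
amp = moment_amplification(P=100e3, P_cr=P_cr)
```

A linear elastic analysis effectively takes the amplification as 1.0; the gap between that and the second-order value grows rapidly as the axial load approaches P_cr, which is why the linear approach can be unconservative near buckling.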
Abstract:
ABSTRACT (FRENCH) This thesis, which concerns the visual system in healthy subjects and in schizophrenic patients, is organized around three scientific articles, published or in preparation. The first article presents a new method for controlling the physical features of stimuli (luminance and spatial frequency). The second article shows, through analyses of EEG data, a deficit of the magnocellular pathway in the visual processing of illusions in schizophrenic patients, demonstrated by the absence of modulation of the P1 component in patients, in contrast to healthy subjects; this absence is elicited by Kanizsa-type illusory stimuli at different eccentricities. Finally, the third article, also using electrical neuroimaging (EEG) methods and a "misaligned gratings" illusion, shows that the processing of illusory contours takes place in the lateral occipital complex (LOC), and further reveals that the activity previously reported in primary visual areas is due to top-down inferences. To make these three articles accessible, the Introduction of this manuscript presents the essential concepts, together with time-frequency analysis methods. It is divided into four parts: the first presents the visual system, from retino-cortical cells through the regions that compose it to the two information-processing pathways. The second part presents schizophrenia: its diagnosis, its deficits in low-level processing of visual stimuli, and its cognitive deficits. The third part presents the processing of illusory contours and the three models used in the last article. Finally, the EEG data-analysis methods are described, including time-frequency methods. The results of the three articles are presented in the chapter of the same name, together with the results obtained with the time-frequency methods. Finally, the Discussion is organized along three axes: the time-frequency methods, with a proposal for analyzing these data using a reference-independent statistical method; the first article, whose discussion demonstrates the quality of the stimulus processing; and the two neurophysiological articles, for which new experiments are proposed to refine the current results on the deficits of schizophrenic patients. This could make it possible to establish a reliable biological marker of schizophrenia.
ABSTRACT (ENGLISH) This thesis focuses on the visual system in healthy subjects and schizophrenic patients. To address this research, advanced methods for the analysis of electroencephalographic (EEG) data were used and developed. The manuscript comprises three scientific articles. The first article presents a novel method to control the physical features of visual stimuli (luminance and spatial frequencies). The second article shows, using electrical neuroimaging of EEG, a deficit in spatial processing associated with the dorsal pathway in chronic schizophrenic patients. This deficit was indexed by an absent modulation of the P1 component in terms of response strength and topography as well as source estimations, and was orthogonal to the preserved ability to process Kanizsa-type illusory contours. Finally, the third article resolved ongoing debates concerning the neural mechanism mediating illusory contour sensitivity by using electrical neuroimaging to show that the first differentiation of illusory contour presence vs. absence is localized within the lateral occipital complex.
This effect was subsequent to modulations due to the orientation of the misaligned grating stimuli. Collectively, these results support a model in which effects in V1/V2 are mediated by "top-down" modulation from the LOC. To frame these three articles, the Introduction of this thesis presents the major concepts they use. Additionally, a section is devoted to time-frequency analysis methods not presented in the articles themselves. The Introduction is divided into four parts. The first part presents three aspects of the visual system: cellular, regional, and its functional interactions. The second part presents an overview of schizophrenia and its sensory-cognitive deficits. The third part presents an overview of illusory contour processing and the three models examined in the third article. Finally, advanced analysis methods for EEG are presented, including time-frequency methodology. The Introduction is followed by a synopsis of the main results of the articles as well as those obtained from the time-frequency analyses. Finally, the Discussion chapter is divided along three axes. The first axis discusses the time-frequency analysis and proposes a novel statistical approach that is independent of the reference. The second axis contextualizes the first article and discusses the quality of the stimulus control and directions for further improvement. Finally, both neurophysiological articles are contextualized by proposing future experiments and hypotheses that may serve to improve our understanding of schizophrenia on the one hand and of visual functions more generally on the other.
Abstract:
Endocannabinoids and cannabinoid 1 (CB(1)) receptors have been implicated in cardiac dysfunction, inflammation, and cell death associated with various forms of shock, heart failure, and atherosclerosis, in addition to their recognized role in the development of various cardiovascular risk factors in obesity/metabolic syndrome and diabetes. In this study, we explored the role of CB(1) receptors in myocardial dysfunction, inflammation, oxidative/nitrative stress, cell death, and interrelated signaling pathways, using a mouse model of type 1 diabetic cardiomyopathy. Diabetic cardiomyopathy was characterized by increased myocardial endocannabinoid anandamide levels, oxidative/nitrative stress, activation of p38/Jun NH(2)-terminal kinase (JNK) mitogen-activated protein kinases (MAPKs), enhanced inflammation (tumor necrosis factor-α, interleukin-1β, cyclooxygenase 2, intercellular adhesion molecule 1, and vascular cell adhesion molecule 1), increased expression of CB(1), advanced glycation end product (AGE) and angiotensin II type 1 receptors (receptor for advanced glycation end product [RAGE], angiotensin II receptor type 1 [AT(1)R]), p47(phox) NADPH oxidase subunit, β-myosin heavy chain isozyme switch, accumulation of AGE, fibrosis, and decreased expression of sarcoplasmic/endoplasmic reticulum Ca(2+)-ATPase (SERCA2a). Pharmacological inhibition or genetic deletion of CB(1) receptors attenuated the diabetes-induced cardiac dysfunction and the above-mentioned pathological alterations. Activation of CB(1) receptors by endocannabinoids may play an important role in the pathogenesis of diabetic cardiomyopathy by facilitating MAPK activation, AT(1)R expression/signaling, AGE accumulation, oxidative/nitrative stress, inflammation, and fibrosis. Conversely, CB(1) receptor inhibition may be beneficial in the treatment of diabetic cardiovascular complications.
Abstract:
The use of geographic information systems (GIS) combined with advanced analysis techniques enables the standardization and integration of data, which usually come from different sources, and allows them to be evaluated jointly, providing more efficiency and reliability in decision-making to promote adequate land use. This study aimed to analyze priority areas for agricultural use in the Capivara River basin, Botucatu, SP, through multicriteria analysis, with a view to the conservation of water resources. The results showed that the Idrisi Selva geographic information system, combined with advanced analysis techniques and the weighted linear combination method, proved to be an effective tool for combining different criteria, allowing the adequacy of agricultural land use to be determined in a less subjective way. The environmental criteria proved suitable for combination and multicriteria analysis, allowing the preparation of a map of suitability classes for agricultural use that can be useful for regional planning and decision-making by public bodies and environmental agents, because the method takes into account the rational use of land while allowing the conservation of water resources.
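The weighted linear combination method mentioned above reduces, for each map cell, to a weighted sum of standardized criterion scores. A minimal per-cell sketch with hypothetical criteria, weights, and 0-255 standardized scores (Idrisi-style), none of which come from the study:

```python
# Weighted linear combination (WLC) for one raster cell.
# Criterion names, weights, and scores are hypothetical illustrations.
def wlc_suitability(scores, weights):
    """Suitability = sum(w_i * x_i), with weights normalized to sum to 1."""
    total = sum(weights)
    return sum(w / total * x for w, x in zip(weights, scores))

# Standardized 0-255 scores for one cell (higher = more suitable).
cell = {"slope": 200, "soil": 150, "distance_to_river": 90}
weights = {"slope": 0.5, "soil": 0.3, "distance_to_river": 0.2}
score = wlc_suitability([cell[k] for k in weights], list(weights.values()))
```

In a GIS workflow this computation is applied to every cell of the co-registered criterion rasters, and the resulting surface is sliced into suitability classes; the subjectivity the abstract refers to is confined to choosing the weights.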
Abstract:
The present Master/Doctorate programme in Nuclear Science and Technology, implemented in the Department of Nuclear Engineering of the Universidad Politécnica de Madrid (NED-UPM), holds the excellence qualification awarded by the Spanish Ministry of Education. One of the main aims of this programme is training in the development of methodologies for simulation, design, and advanced analysis, including experimental tools, necessary for research and for professional work in the nuclear field.
Abstract:
The Helix Research Institute (HRI) in Japan is releasing 4356 HUman Novel Transcripts and related information in the newly established HUNT database. The institute is a joint research project principally funded by the Japanese Ministry of International Trade and Industry, and the clones were sequenced in the governmental New Energy and Industrial Technology Development Organization (NEDO) Human cDNA Sequencing Project. The HUNT database contains an extensive amount of annotation from advanced analysis and represents an essential bioinformatics contribution towards the understanding of gene function. The HRI human cDNA clones were obtained from full-length-enriched cDNA libraries constructed with the oligo-capping method and have yielded novel full-length cDNA sequences. A large fraction has little similarity to any protein of known function, and to obtain clues about possible function we have developed original analysis procedures. Any putative function deduced here can be validated or refuted by complementary analysis results. The user can also extract information from specific categories such as PROSITE patterns, PFAM domains, PSORT localization, transmembrane helices, and clones with GENIUS structure assignments. The HUNT database can be accessed at http://www.hri.co.jp/HUNT.
A portrait of secondary-school students' difficulties with the spelling of homophone forms
Abstract:
This descriptive study aims to provide a portrait of the main difficulties that secondary-school students encounter with the spelling of homophones, examined from several analytical angles. We first highlighted the importance of spelling difficulties among Quebec secondary-school students and the proportion of those errors attributable to the spelling of homophones. Drawing on data collected by the Projet grammaire-écriture research group, whose first objective was to gather a large body of data through two collection instruments (a dictation and a written composition), we began by identifying the homophone errors most frequently committed by students, and then analysed each problematic homophone form against varied criteria such as its lexical frequency in French, its grammatical category, or the syntactic structure underlying it. The most important errors were examined more closely: we established the percentage of correct versus erroneous spellings across all the students' texts. Finally, we compared our results with those obtained by McNicoll and Roy (1984) with a primary-school population. Our analysis shows that verb endings in /E/ pose the greatest problem for secondary-school students, followed by the homophone forms s'est/c'est/ces/ses and se/ce.
Abstract:
PURPOSE: To perform advanced analysis of the corneal deformation response to air pressure in keratoconics compared with age- and sex-matched controls. METHODS: The ocular response analyzer was used to measure the air pressure-corneal deformation relationship of 37 patients with keratoconus and 37 age (mean 36 ± 10 years)- and sex-matched controls with healthy corneas. Four repeat air pressure-corneal deformation profiles were averaged, and 42 separate parameters relating to each element of the profiles were extracted. Corneal topography and pachymetry were performed with the Orbscan II. The severity of the keratoconus was graded based on a single metric derived from anterior corneal curvatures, difference in astigmatism in each meridian, anterior best-fit sphere, and posterior best-fit sphere. RESULTS: Most of the biomechanical characteristics of keratoconic eyes were significantly different from normal eyes (P <0.001), especially during the initial corneal applanation. With increasing keratoconus severity, the cornea was thinner (r = -0.407, P <0.001), the speed of corneal concave deformation past applanation was quicker (dive; r = -0.314, P = 0.01), and the tear film index was lower (r = -0.319, P = 0.01). The variance in keratoconus severity could be accounted for by the corneal curvature and central corneal thickness (r = 0.80) with biomechanical characteristics contributing an additional 4% (total r = 0.84). The area under the receiver operating characteristic curve was 0.919 ± 0.025 for keratometry alone, 0.965 ± 0.014 with the addition of pachymetry, and 0.972 ± 0.012 combined with ocular response analyzer biomechanical parameters. CONCLUSIONS: Characteristics of the air pressure-corneal deformation profile are more affected by keratoconus than the traditionally extracted corneal hysteresis and corneal resistance factors. 
These biomechanical metrics slightly improved the detection and severity prediction of keratoconus above traditional keratometric and pachymetric assessment of corneal shape.
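The area under the receiver operating characteristic curve reported above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen keratoconic eye receives a higher score than a randomly chosen control. A minimal sketch on made-up scores, not the study's data:

```python
# AUC via the Mann-Whitney statistic: the fraction of (positive, negative)
# pairs in which the positive case outscores the negative (ties count 1/2).
# The score lists below are illustrative, not clinical measurements.
def auc(pos_scores, neg_scores):
    """Probability a random positive outscores a random negative."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

a = auc([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1])
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is the scale on which the reported gains from adding pachymetry (0.919 to 0.965) and biomechanical parameters (to 0.972) should be read.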
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on the proper modeling of connection behavior. The primary challenge of modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches—mathematical models and informational models—are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined and is then followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through modeling beam-to-column connections. Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships of angles, column panel zones, and contact between angles and column flanges, are derived by using only material and geometric properties and theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified by empirically-suggested mathematical representation and expert opinions. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationship of components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. 
In the case of a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which take advantage of only material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires components of slip and bolt hole ovalization, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically-generated and experimental data, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide an insight into the underlying mechanics of components. In this study, a new hybrid modeling framework is proposed. In the hybrid framework, a conventional mathematical model is complemented by the informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence considering informational alternatives. This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of informational methods is to model aspects that the mathematical model leaves out. 
The autoprogressive algorithm and self-learning simulation extract the missing aspects from a system response. In a hybrid framework, experimental data are an integral part of modeling, rather than being used strictly for validation. The potential of the hybrid methodology is illustrated through modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange-plates, and the column panel zone, are idealized into a mathematical model by using a complete mechanical approach. Although the mathematical model represents envelope curves in terms of initial stiffness and yielding strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges as well as slip between angles/flange-plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.