Abstract:
This thesis comprises two main objectives. The first involved stereochemical studies of chiral 4,6-diamino-1-aryl-1,2-dihydro-s-triazines and an investigation of how the different conformations of these stereoisomers may affect their binding affinity to the enzyme dihydrofolate reductase (DHFR). The ortho-substituted 1-aryl-1,2-dihydro-s-triazines were synthesised by the three-component method. An ortho-substitution at the C6' position was observed when meta-azidocycloguanil was decomposed in acid. The ortho-substituent restricts free rotation, which gives rise to atropisomerism. Ortho-substituted 4,6-diamino-1-aryl-2-ethyl-1,2-dihydro-2-methyl-s-triazine contains two elements of chirality and therefore exists as four stereoisomers: (S,aR), (R,aS), (R,aR) and (S,aS). The energy barriers to rotation of these compounds, calculated with the semi-empirical molecular orbital program MOPAC, were found to exceed 23 kcal/mol. The diastereoisomers were resolved and enriched by C18 reversed-phase HPLC. Nuclear Overhauser effect experiments revealed that (S,aR) and (R,aS) were the more stable pair of stereoisomers and therefore existed as the major component. The minor diastereoisomers showed greater binding affinity for rat liver DHFR in an in vitro assay. The second objective was to investigate whether DHFR inhibitory activity could be retained by replacing the classical diamino heterocyclic moiety with an amidinyl group. 4-Benzylamino-3-nitro-N,N-dimethyl-phenylamidine was synthesised in two steps. One of the two phenylamidines showed weak inhibition of rat liver DHFR. This weak activity may be due to the failure of the inhibitor molecule to form strong hydrogen bonds with residue Glu-30 at the active site of the enzyme.
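As a rough check on why a barrier above 23 kcal/mol is enough for the diastereoisomers to be separable, the Eyring equation gives the interconversion rate (a minimal worked estimate, assuming a transmission coefficient of 1 and T = 298 K; the figures below are illustrative, not taken from the thesis):

    \[
    k = \frac{k_B T}{h}\, e^{-\Delta G^{\ddagger}/RT}
      \approx 6.2\times10^{12}\,\mathrm{s^{-1}} \times e^{-96.2/2.48}
      \approx 9\times10^{-5}\,\mathrm{s^{-1}},
    \qquad
    t_{1/2} = \frac{\ln 2}{k} \approx 8\times10^{3}\,\mathrm{s} \approx 2\,\mathrm{h}.
    \]

A rotation half-life of roughly a couple of hours at room temperature is slow enough for the atropisomers to be enriched by HPLC and assayed separately, consistent with the ~22-23 kcal/mol threshold usually quoted for isolable atropisomers.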
Abstract:
This paper presents the main achievements of the author's PhD dissertation. The work is dedicated to mathematical and semi-empirical approaches applied to the case of Bulgarian wildland fires. After the introductory explanations, a short summary of each chapter is given, covering the main parts of the obtained results. The methods used are described briefly and the main outcomes are listed. ACM Computing Classification System (1998): D.1.3, D.2.0, K.5.1.
Abstract:
Surface water flow patterns in wetlands play a role in shaping substrates, biogeochemical cycling, and ecosystem characteristics. This paper focuses on the factors controlling flow across a large, shallow-gradient subtropical wetland (Shark River Slough in Everglades National Park, USA), which displays vegetative patterning indicative of overland flow. Between July 2003 and December 2007, flow speeds at five sites were very low and exhibited seasonal fluctuations that were correlated with seasonal changes in water depth but also showed distinctive deviations. Stepwise linear regression showed that upstream gate discharges, local stage gradients, and stage together explained 50 to 90% of the variance in flow speed at four of the five sites, but only 10% at one site located close to a levee-canal combination. Two non-linear, semi-empirical expressions relating flow speeds to the local hydraulic gradient, water depths, and vegetative resistance accounted for 70% of the variance in the measured speeds. The data suggest that local-scale factors such as channel morphology, vegetation density, and groundwater exchanges must be considered along with landscape position and basin-scale geomorphology when examining the interactions between flow and community characteristics in low-gradient wetlands such as the Everglades.
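For orientation, semi-empirical speed-depth-gradient expressions of this kind typically take a Manning-type or power-law form (an illustrative form only, assuming a depth- and vegetation-dependent resistance coefficient; these are not necessarily the two expressions fitted in the paper):

    \[
    u = \frac{1}{n(h)}\, h^{2/3} S^{1/2}
    \qquad\text{or, more generally,}\qquad
    u = a\, h^{b} S^{c},
    \]

where u is the flow speed, h the water depth, S the local hydraulic (stage) gradient, and n(h) or the coefficients a, b, c absorb the vegetative resistance.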
Abstract:
This study examines the performance of two geomagnetic index series, and of series synthesized from a semi-empirical model of magnetospheric currents, in explaining the geomagnetic activity observed at Northern Hemisphere mid-latitude ground-based stations. We analyse data for the 2007 to 2014 period from four magnetic observatories (Coimbra, Portugal; Panagyurishte, Bulgaria; Novosibirsk, Russia; and Boulder, USA) at geomagnetic latitudes between 40° and 50° N. The quiet daily (QD) variation is first removed from the time series of the geomagnetic horizontal component (H) using natural orthogonal components (NOC) tools. We compare the resulting series with series of storm-time disturbance (Dst) and ring current (RC) indices and with H series synthesized from the Tsyganenko and Sitnov (2005, doi:10.1029/2004JA010798) (TS05) semi-empirical model of the storm-time geomagnetic field. In the analysis, we separate days with low and high local K-index values. Our results show that NOC models are as efficient as standard models of QD variation in preparing raw data to be compared with proxies, but with much less complexity. For the two stations in Europe, we obtain an indication that NOC models may be able to separate ionospheric and magnetospheric contributions. Dst and RC series explain the four observatory H-series successfully, with mean significant correlation coefficients from 0.5 to 0.6 during low geomagnetic activity (K < 4) and from 0.6 to 0.7 for geomagnetically active days (K ≥ 4). With regard to the performance of TS05, our results show that the four observatories separate into two groups: Coimbra and Panagyurishte, in one group, for which the magnetospheric/ionospheric ratio in the QD variation is smaller, a dominantly ionospheric QD contribution can be removed, and TS05 simulations are the best proxy; Boulder and Novosibirsk, in the other group, for which the ionospheric and magnetospheric contributions to the QD variation cannot be differentiated and correlations with TS05 series cannot be improved. The main contributors to the magnetospheric QD signal are the Birkeland currents. The relatively good success of the TS05 model in explaining ground-based irregular geomagnetic activity at mid-latitudes makes it an effective tool to classify storms according to their main sources. For Coimbra and Panagyurishte in particular, where the ionospheric and magnetospheric daily contributions seem easier to separate, we can aspire to use the TS05 model for ensemble generation in space weather (SW) forecasting and for the interpretation of past SW events.
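The core of the comparison can be sketched in a few lines: correlate the QD-cleaned H residuals with an index series, separately for quiet and active days. A minimal sketch (the file name and column names are hypothetical, and the QD removal is assumed to have been done beforehand):

    import numpy as np
    import pandas as pd

    # Hourly values after removal of the quiet daily (QD) variation:
    # columns assumed to be time, H_res (nT), Dst (nT) and the local K index.
    df = pd.read_csv("observatory_residuals.csv", parse_dates=["time"])

    for label, subset in (("quiet (K < 4)", df[df["K"] < 4]),
                          ("active (K >= 4)", df[df["K"] >= 4])):
        r = np.corrcoef(subset["H_res"], subset["Dst"])[0, 1]
        print(f"{label}: r = {r:.2f} over {len(subset)} hourly values")

The same loop would be repeated with the RC index or the TS05-synthesized H series in place of Dst.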
Abstract:
Piles are one of the most important foundation solutions for buildings: they transmit the loads from the structure to deeper, more resistant soil layers. The interaction between the foundation element and the soil is a very important variable, so understanding it is indispensable in order to determine the strength of the pile-soil assembly and to establish design criteria for each application of the pile. In this research, analyses were carried out using load tests on precast concrete piles and SPT soil investigations, and the ultimate load capacity of the foundation was estimated through load-settlement curve extrapolation methods as well as semi-empirical and theoretical methods. Comparisons were then made between the different methods for two soil types, one with granular behaviour and one cohesive. The soil parameters used in the methods were obtained from empirical correlations with the standard penetration number (NSPT). The load-settlement curves of the piles are also analysed. Based on the comparisons, the semi-empirical Décourt-Quaresma method was indicated as the most reliable for estimating the load capacity in both granular and cohesive soils, while among the extrapolation methods studied the Van der Veen method is recommended as the most appropriate for predicting the ultimate load.
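As an illustration of the extrapolation step, the Van der Veen method fits an exponential load-settlement law and reads its asymptote as the ultimate load. A minimal sketch (the load-test data below are hypothetical, used only to show the fit):

    import numpy as np
    from scipy.optimize import curve_fit

    def van_der_veen(s, q_ult, a):
        # Van der Veen load-settlement law: Q = Q_ult * (1 - exp(-a*s)).
        return q_ult * (1.0 - np.exp(-a * s))

    # Hypothetical static load test: settlement s (mm) vs. mobilised load Q (kN).
    s = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 30.0])
    q = np.array([195.0, 440.0, 740.0, 1090.0, 1330.0, 1385.0])

    (q_ult, a), _ = curve_fit(van_der_veen, s, q, p0=[1.2 * q.max(), 0.1])
    print(f"Extrapolated ultimate load: {q_ult:.0f} kN")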
Abstract:
Trace gases are important to our environment even though they are present only in traces, and their concentrations must be monitored so that any necessary interventions can be made at the right time. There are lower and upper bounds that produce favourable conditions for life, so monitoring trace gases is nowadays an essential task, accomplished by many techniques. One of them is differential optical absorption spectroscopy (DOAS), which mathematically consists of a regression - the classical method uses least squares - to retrieve the trace gas concentrations. In order to achieve better results, many works have tried different techniques in place of the classical approach. Some have preprocessed the signals to be analysed with a denoising procedure - e.g. the discrete wavelet transform (DWT). This work presents a semi-empirical study to find the most suitable DWT family to be used in this denoising. The search seeks, among many well-known families, the one that best removes the noise while keeping the original signal's main features; by decreasing the noise, the residual left after the regression decreases too. The analysis takes into account the wavelet decomposition level, the threshold to be applied to the detail coefficients, and how to apply it - hard or soft thresholding. The signals used come from an open online database containing characteristic signals of commonly studied trace gases.
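A compact version of the denoising step can be written with PyWavelets (a sketch only; the wavelet family, decomposition level and universal threshold below are illustrative choices, not the families or thresholds ranked in this study):

    import numpy as np
    import pywt

    def dwt_denoise(signal, wavelet="db8", level=4, mode="soft"):
        # Decompose, threshold the detail coefficients, then reconstruct.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Noise estimate from the finest-scale details (Donoho-Johnstone universal threshold).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
        coeffs[1:] = [pywt.threshold(c, thresh, mode=mode) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

The denoised spectrum would then enter the usual least-squares DOAS fit in place of the raw one.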
On thermodynamics in the primary power conversion of oscillating water column wave energy converters
Abstract:
The paper presents an investigation into the thermodynamics of the air flow in the air chamber of oscillating water column wave energy converters, in which the oscillating water surface in the water column pressurises or depressurises the air in the chamber. To study the thermodynamics and the compressibility of the air in the chamber, a method is developed in this research: the power take-off is replaced with an accepted semi-empirical relationship between the air flow rate and the oscillating water column chamber pressure, and the thermodynamic process is simplified as an isentropic process. This facilitates the use of a direct expression for the work done on the power take-off by the flowing air and the derivation of a single differential equation that defines the thermodynamic process occurring inside the air chamber. Solving the differential equation, the chamber pressure can be obtained if the interior water surface motion is known, or the chamber volume (and thus the interior water surface motion) if the chamber pressure is known. As a result, the effects of air compressibility can be studied. Examples given in the paper show the compressibility, and its effects on the power losses, for large oscillating water column devices.
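To illustrate the kind of single differential equation involved, assume (as a sketch, not the paper's exact formulation) a linear pneumatic damping law Q = Λp for the power take-off, isentropic air with ratio γ, and neglect the density difference between inhaled and exhaled air. Mass conservation in the chamber, d(ρV)/dt = −ρQ, combined with ρ ∝ p_abs^{1/γ}, then gives

    \[
    \frac{dp_{\mathrm{abs}}}{dt} = -\,\frac{\gamma\, p_{\mathrm{abs}}}{V(t)}
        \left( \frac{dV}{dt} + \Lambda\, p \right),
    \qquad
    P_{\mathrm{pto}} = p\,Q = \Lambda\, p^{2},
    \]

where V(t) follows the interior water surface motion, p is the chamber gauge pressure and p_abs the absolute pressure; integrating this equation yields the chamber pressure for a prescribed water surface motion, or vice versa.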
Abstract:
We present a new approach to understand the landscape of supernova explosion energies, ejected nickel masses, and neutron star birth masses. In contrast to other recent parametric approaches, our model predicts the properties of neutrino-driven explosions based on the pre-collapse stellar structure without the need for hydrodynamic simulations. The model is based on physically motivated scaling laws and simple differential equations describing the shock propagation, the contraction of the neutron star, the neutrino emission, the heating conditions, and the explosion energetics. Using model parameters compatible with multi-D simulations and a fine grid of thousands of supernova progenitors, we obtain a variegated landscape of neutron star and black hole formation similar to other parametrized approaches and find good agreement with semi-empirical measures for the ‘explodability’ of massive stars. Our predicted explosion properties largely conform to observed correlations between the nickel mass and explosion energy. Accounting for the coexistence of outflows and downflows during the explosion phase, we naturally obtain a positive correlation between explosion energy and ejecta mass. These correlations are relatively robust against parameter variations, but our results suggest that there is considerable leeway in parametric models to widen or narrow the mass ranges for black hole and neutron star formation and to scale explosion energies up or down. Our model is currently limited to an all-or-nothing treatment of fallback and there remain some minor discrepancies between model predictions and observational constraints.
Abstract:
Since core-collapse supernova simulations still struggle to produce robust neutrino-driven explosions in 3D, it has been proposed that asphericities caused by convection in the progenitor might facilitate shock revival by boosting the activity of non-radial hydrodynamic instabilities in the post-shock region. We investigate this scenario in depth using 42 relativistic 2D simulations with multigroup neutrino transport to examine the effects of velocity and density perturbations in the progenitor for different perturbation geometries that obey fundamental physical constraints (like the anelastic condition). As a framework for analysing our results, we introduce semi-empirical scaling laws relating neutrino heating, average turbulent velocities in the gain region, and the shock deformation in the saturation limit of non-radial instabilities. The squared turbulent Mach number, ⟨Ma²⟩, reflects the violence of aspherical motions in the gain layer, and explosive runaway occurs for ⟨Ma²⟩ ≳ 0.3, corresponding to a reduction of the critical neutrino luminosity by ∼25 per cent compared to 1D. In the light of this theory, progenitor asphericities aid shock revival mainly by creating anisotropic mass flux on to the shock: differential infall efficiently converts velocity perturbations in the progenitor into density perturbations δρ/ρ at the shock of the order of the initial convective Mach number Ma_prog. The anisotropic mass flux and ram pressure deform the shock and thereby amplify post-shock turbulence. Large-scale (ℓ = 2, ℓ = 1) modes prove most conducive to shock revival, whereas small-scale perturbations require unrealistically high convective Mach numbers. Initial density perturbations in the progenitor are only of the order of Ma²_prog and therefore play a subdominant role.
Abstract:
This study focuses on the characteristics conferred by the presence of fluorine in molecules, more specifically in fluoroquinolones, antibiotics that are increasingly used. Several parameters were analysed to obtain information on the drug-receptor interaction in fluoroquinolones. Computational chemistry characterisation techniques were used to characterise the fluoroquinolones electronically and structurally (3D), complementing the semi-empirical methods used initially. As is well known, specificity and affinity for the target site are essential for a drug's efficacy. Fluoroquinolones have undergone great development since the first quinolone was synthesised in 1958, with numerous derivatives synthesised since then. This is because they are easily manipulated, yielding highly potent drugs with a broad spectrum, optimised pharmacokinetic factors and reduced adverse effects. The major pharmacological change responsible for the increased interest in this group was the substitution at C6 of a fluorine atom in place of a hydrogen. To obtain information on the influence of fluorine on the structural and electronic properties of fluoroquinolones, a comparison was made between the fluoroquinolone with fluorine at C6 and with hydrogen at C6. The four fluoroquinolones included in this study were ciprofloxacin, moxifloxacin, sparfloxacin and pefloxacin. The information was obtained with quantum-mechanics and molecular-mechanics software. It was concluded that the presence of the fluorine substituent does not significantly modify the geometry of the molecules, but does change the charge distribution on the vicinal carbon and on the atoms in the alpha, beta and gamma positions relative to it. This modification of the electronic distribution may condition the binding of the drug to the receptor, modifying its pharmacological activity.
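The kind of comparison described (partial charge on the vicinal carbon with F versus H at the same position) can be illustrated with a quick partial-charge calculation. The sketch below uses RDKit's Gasteiger charges on benzene versus fluorobenzene purely to show the vicinal-carbon charge shift; it is not the semi-empirical/quantum-mechanical procedure used in the thesis, and the molecules are simplified stand-ins for the C6-H and C6-F quinolones:

    from rdkit import Chem
    from rdkit.Chem import AllChem

    def ring_carbon_charges(smiles):
        # Compute Gasteiger partial charges and return those of the carbons.
        mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
        AllChem.ComputeGasteigerCharges(mol)
        return [round(a.GetDoubleProp("_GasteigerCharge"), 3)
                for a in mol.GetAtoms() if a.GetSymbol() == "C"]

    print("benzene      :", ring_carbon_charges("c1ccccc1"))    # H analogue
    print("fluorobenzene:", ring_carbon_charges("Fc1ccccc1"))   # F analogue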
Abstract:
Numerous studies of the dual-mode scramjet isolator, a critical component in preventing inlet unstart and/or vehicle loss by containing a collection of flow disturbances called a shock train, have been performed since the dual-mode propulsion cycle was introduced in the 1960s. Low-momentum corner flow and other three-dimensional effects inherent to rectangular isolators have, however, been largely ignored in experimental studies of the boundary layer separation driven isolator shock train dynamics. Furthermore, the use of two-dimensional diagnostic techniques in past works, be it single-perspective line-of-sight schlieren/shadowgraphy or single-axis wall pressure measurements, has been unable to resolve the three-dimensional flow features inside the rectangular isolator. These flow characteristics need to be thoroughly understood if robust dual-mode scramjet designs are to be fielded. The work presented in this thesis is focused on experimentally analyzing shock train/boundary layer interactions from multiple perspectives in aspect ratio 1.0, 3.0, and 6.0 rectangular isolators with inflow Mach numbers ranging from 2.4 to 2.7. Secondary steady-state Computational Fluid Dynamics studies are performed to compare to the experimental results and to provide additional perspectives of the flow field. Specific issues that remain unresolved after decades of isolator shock train studies and that are addressed in this work include the three-dimensional formation of the isolator shock train front, the spatial and temporal low-momentum corner flow separation scales, the transient behavior of shock train/boundary layer interaction at specific coordinates along the isolator's lateral axis, and the effects of the rectangular geometry on semi-empirical relations for shock train length prediction. A novel multiplane shadowgraph technique is developed to resolve the structure of the shock train along both the minor and major duct axes simultaneously. It is shown that the shock train front is of a hybrid oblique/normal nature. Initial low-momentum corner flow separation spawns the formation of oblique shock planes which interact and proceed toward the center flow region, becoming more normal in the process. The hybrid structure becomes more two-dimensional as aspect ratio is increased, but corner flow separation precedes center flow separation by on the order of 1 duct height for all aspect ratios considered. Additional instantaneous oil flow surface visualization shows the symmetry of the three-dimensional shock train front around the lower wall centerline. Quantitative synthetic schlieren visualization shows that the density gradient magnitude approximately doubles between the corner oblique and center flow normal structures. Fast response pressure measurements acquired near the corner region of the duct show preliminary separation in the outer regions preceding centerline separation by on the order of 2 seconds. Non-intrusive Focusing Schlieren Deflectometry Velocimeter measurements reveal that both the shock train oscillation frequency and velocity component decrease as measurements are taken away from the centerline and towards the side-wall region, and confirm the more two-dimensional shock train front approximation for higher aspect ratios.
An updated modification to Waltrup & Billig's original semi-empirical shock train length relation for circular ducts based on centerline pressure measurements is introduced to account for rectangular isolator aspect ratio, upstream corner separation length scale, and major- and minor-axis boundary layer momentum thickness asymmetry. The latter is derived both experimentally and computationally, and it is shown that the major-axis (side-wall) boundary layer has a lower momentum thickness than the minor-axis (nozzle-bounded) boundary layer, making it more separable. Furthermore, it is shown that the updated correlation drastically improves shock train length prediction capabilities in higher aspect ratio isolators. This thesis suggests that performance analysis of rectangular confined supersonic flow fields can no longer be based on observations and measurements obtained along a single axis alone. Knowledge gained by the work performed in this study will allow for the development of more robust shock train leading edge detection techniques and isolator designs which can greatly mitigate the risk of inlet unstart and/or vehicle loss in flight.
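For reference, the circular-duct correlation being modified is commonly quoted in the isolator literature in a form similar to the following (reproduced here as an assumption; the exact constants should be checked against Waltrup & Billig's original paper):

    \[
    \frac{x\left(M_1^{2}-1\right)Re_\theta^{1/4}}{\sqrt{D}\,\sqrt{\theta}}
    \;\approx\; 50\left(\frac{p}{p_1}-1\right) + 170\left(\frac{p}{p_1}-1\right)^{2},
    \]

where x is the distance along the shock train, D the duct diameter, θ and Re_θ the boundary layer momentum thickness and its Reynolds number at the isolator entrance, M_1 the entrance Mach number and p/p_1 the pressure rise; the modification introduced in this thesis adds aspect-ratio, corner-separation and boundary-layer-asymmetry terms for rectangular ducts.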
Abstract:
Soft robots are robots made mostly or completely of soft, deformable, or compliant materials. As humanoid robotic technology takes on a wider range of applications, it has become apparent that such robots could replace humans in dangerous environments. Current robotic hands for these environments are very difficult and costly to manufacture. Therefore, a robotic hand with a simple architecture and cheap fabrication techniques is needed. The goal of this thesis is to detail the design, fabrication, modeling, and testing of the SUR Hand. The SUR Hand is a soft, underactuated robotic hand designed to be cheaper and easier to manufacture than conventional hands, yet it maintains much of their dexterity and precision. This thesis details the design process for the soft pneumatic fingers, compliant palm, and flexible wrist. It also discusses a semi-empirical model for finger design and the creation and validation of grasping models.
Abstract:
This work, carried out as the final project of a master's degree in Construction Engineering, studies the behaviour of flexible, multi-propped earth-retaining structures (with different support types) in two types of homogeneous soil. Classical theories, such as Rankine's, developed for rigid earth-retaining structures, were used, together with the semi-empirical theories of Terzaghi & Peck that culminated in the Terzaghi & Peck diagrams. Although the Terzaghi & Peck diagrams are earth-pressure diagrams intended for flexible earth-retaining structures, they have some important limitations, such as restrictions on their applicability to heterogeneous soil profiles, with or without a water table, and the fact that they do not provide the earth-pressure distribution in the passive (embedded) zone. Nowadays, finite element models allow engineering problems to be simulated much more rigorously. This work therefore analyses a practical case in different soils and with different support types, first by analytical methods using the classical theories and then by numerical methods (with different calculation programs), and finally compares the results obtained with the different methods. The structures were initially pre-designed using the classical methods: the Terzaghi & Peck earth-pressure diagrams for the active zone (excavation side), Rankine's theory for the earth pressures on the embedded part of the wall (diaphragm wall), and the software Ftool to obtain the design parameters of the earth-retaining structures under study. The automatic calculation program CYPE 2015 and the finite element program PLAXIS Introductory 2010 were then used; these programs allow the staged construction of the wall to be simulated. To study the influence of several parameters on the behaviour of the wall, the study was carried out with two distinct soils, a soft clayey soil and a dense sandy soil, and with two distinct support types, active anchors and passive struts. Different parameters of the retaining structure were analysed: horizontal earth pressures, horizontal displacements, axial force, shear force and bending moment.
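For the pre-design step, the classical Rankine relations referred to above take the familiar form (stated here for a homogeneous soil with effective friction angle φ' and effective cohesion c'):

    \[
    K_a = \tan^{2}\!\left(45^{\circ}-\tfrac{\varphi'}{2}\right),\qquad
    K_p = \tan^{2}\!\left(45^{\circ}+\tfrac{\varphi'}{2}\right),
    \]
    \[
    \sigma'_a = K_a\,\sigma'_v - 2c'\sqrt{K_a},\qquad
    \sigma'_p = K_p\,\sigma'_v + 2c'\sqrt{K_p},
    \]

with σ'_v the effective vertical stress at the depth considered; these give the active and passive pressures acting on the embedded part of the wall.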
Abstract:
The dissipation of triadimefon, {1-(4-chlorophenoxy)-3,3-dimethyl-1-(1H-1,2,4-triazol-1-yl)butanone}, was studied after its application to melon leaves, glass and paper, under both greenhouse and field conditions. The dissipation rate of triadimefon in its commercial formulation Bayleton 5 was found to be lower in the greenhouse than in the field. The results for different samples under the same conditions show that the dissipation of triadimefon was biphasic. This result can be accounted for by a semi-empirical model which assumes an initial fast decline of the dissipation rate, attributed to an exponential decay of the volatilization rate, followed by a second phase where the dissipation is due to a first-order degradation process.
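A model of the type described can be written compactly by letting the first-order rate "constant" decay exponentially from an initial volatilization-dominated value towards the degradation rate (an illustrative formulation, assuming the two contributions are additive; not necessarily the authors' exact parameterisation):

    \[
    -\frac{dC}{dt} = \left(k_d + k_v\,e^{-\lambda t}\right)C
    \quad\Longrightarrow\quad
    \ln\frac{C(t)}{C_0} = -\,k_d\,t - \frac{k_v}{\lambda}\left(1-e^{-\lambda t}\right),
    \]

which reproduces the fast initial decline (initial slope ≈ k_d + k_v) followed by the slower first-order tail (slope k_d).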