994 results for Linear potential
Abstract:
Sequential estimation of the success probability $p$ in inverse binomial sampling is considered in this paper. For any estimator $\hat{p}$, its quality is measured by the risk associated with normalized loss functions of linear-linear or inverse-linear form. These functions are possibly asymmetric, with arbitrary slope parameters $a$ and $b$ for $\hat{p} < p$ and $\hat{p} > p$, respectively. Interest in these functions is motivated by their significance and potential uses, which are briefly discussed. Estimators are given for which the risk has an asymptotic value as $p \rightarrow 0$, and which guarantee that, for any $p \in (0,1)$, the risk is lower than its asymptotic value. This allows selecting the required number of successes, $n$, to meet a prescribed quality irrespective of the unknown $p$. In addition, the proposed estimators are shown to be approximately minimax when $a/b$ does not deviate too much from $1$, and asymptotically minimax as $n \rightarrow \infty$ when $a=b$.
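As a hedged illustration of the sampling scheme only: the sketch below uses Haldane's classical estimator $(n-1)/(N-1)$ for inverse binomial sampling, which is not necessarily the estimator proposed in the paper, and normalizing the linear-linear loss by $p$ is likewise an assumption.

```python
import random

def inverse_binomial_sample(p, n, rng):
    """Run Bernoulli(p) trials until n successes occur; return the trial count N."""
    trials = successes = 0
    while successes < n:
        trials += 1
        if rng.random() < p:
            successes += 1
    return trials

def haldane_estimate(n, N):
    # Classical unbiased estimator for inverse binomial sampling
    return (n - 1) / (N - 1)

def linlin_loss(p_hat, p, a=1.0, b=1.0):
    # Linear-linear loss with slope a for underestimation and b for
    # overestimation; division by p is an assumed normalization
    return (a * (p - p_hat) if p_hat < p else b * (p_hat - p)) / p

rng = random.Random(0)
p_true, n = 0.05, 20
losses = [linlin_loss(haldane_estimate(n, inverse_binomial_sample(p_true, n, rng)),
                      p_true) for _ in range(2000)]
risk = sum(losses) / len(losses)  # Monte Carlo estimate of the risk
```

For a symmetric loss ($a = b$) the Monte Carlo risk shrinks as the required number of successes $n$ grows, which is the lever the paper uses to meet a prescribed quality irrespective of the unknown $p$.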
Abstract:
The performance of an innovative receiver for linear Fresnel reflectors is studied in this paper, and the results are analyzed from a physical perspective. The receiver consists of a bundle of tubes parallel to the mirror arrays; as the number of tubes increases their diameter decreases, giving a smaller cross section for the same receiver width. This implies higher heat-carrier fluid speeds and thus a more effective heat transfer process, although it also requires higher pumping power. The mass flow is optimized for different tube diameters, impinging radiation intensities and fluid inlet temperatures. It is found that the best receiver design, namely the tube diameter that maximizes the exergetic efficiency for given working conditions, is similar for all the cases studied. There is a range of tube diameters that yield similar efficiencies, which can lead to capital cost reductions thanks to this design flexibility. In addition, the length of the receiver is also optimized, and the optimal length is observed to be similar for the working conditions considered. As a result of this study, it is found that this innovative receiver provides an optimum design for the whole day, even though the impinging radiation intensity varies notably. The thermal features of this type of receiver could be the basis of a new generation of concentrated solar power plants with great potential for cost reduction, because of the simplicity of the system, the lower weight of the components, and the flexibility of using the receiver tubes for different streams of the heat-carrier fluid.
Abstract:
A general theory describing the B.I.E. linear approximation in potential and elasticity problems is developed. A method to treat the Dirichlet condition at sharp vertices is presented. Although the study is developed for linear elements, its extension to higher-order interpolation is straightforward. A new direct procedure for assembling the global system of equations to be solved is finally shown.
Abstract:
Studies of earthquakes over the last 50 years and the examination of dynamic soil behavior reveal that soil behavior is highly nonlinear and hysteretic even at small strains.
Nonlinear behavior of soils during a seismic event has a predominant role in current site response analysis. One-dimensional seismic ground response analyses are often performed using equivalent-linear procedures, which require few, generally well-known parameters. Nonlinear analyses have the potential to simulate soil behavior more accurately, but their implementation in practice has been limited by poorly documented and unclear parameter selection, as well as inadequate documentation of the benefits of nonlinear modeling relative to equivalent-linear modeling. In soil analysis, soil behaviour is approximated as a Kelvin-Voigt solid with an elastic shear modulus and viscous damping. In linear and nonlinear analysis, more complex geometries and rheological models are being considered. The first is being addressed by considering richer parametrizations of the linearized behavior and the second by using multi-mode spring-dashpot elements with eventual fractional damping. The use of fractional calculus is motivated in large part by the fact that fewer parameters are required to achieve an accurate approximation of experimental data. Based on the Kelvin-Voigt model, viscoelastodynamics is revisited from its most standard formulation to some more advanced descriptions involving frequency-dependent damping (or viscosity), analyzing the effects of considering fractional derivatives to represent such viscous contributions. We will prove that such a choice results in richer models that can accommodate different constraints related to the dissipated power, response amplitude and phase angle. Moreover, the use of fractional derivatives makes it possible to accommodate in parallel, within a generalized Kelvin-Voigt analog, many dashpots that increase the modeling flexibility for describing experimental findings. Obviously these rich models involve many parameters: those associated with the behavior and those related to the fractional derivatives.
The parametric analysis of all these models requires efficient numerical techniques able to simulate complex behaviors. The Proper Generalized Decomposition (PGD) is the perfect candidate for producing such parametric solutions. We can compute off-line the parametric solution for the soil deposit, for all parameters of the model. As soon as such parametric solutions are available, the problem can be solved in real time because no new calculation is needed: the solver only needs to particularize on-line the parametric solution computed off-line, which significantly alleviates the solution procedure. Within the PGD framework, material parameters and the different derivation powers can be introduced as extra-coordinates in the solution procedure. Fractional calculus and the new model reduction method called Proper Generalized Decomposition have been applied in this thesis to both linear analysis and nonlinear soil response analysis using an equivalent-linear method.
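The frequency-dependent damping discussed above can be made concrete with the standard fractional Kelvin-Voigt constitutive law (a textbook form, not necessarily the exact parametrization used in the thesis):

```latex
% Classical Kelvin-Voigt: stress = elastic + viscous contribution
\sigma(t) = G\,\gamma(t) + \eta\,\frac{d\gamma(t)}{dt}
% Fractional generalization: replace the first-order time derivative
% by a fractional derivative of order \alpha \in (0,1]
\sigma(t) = G\,\gamma(t) + \eta\,D^{\alpha}\gamma(t)
% In the frequency domain the complex modulus becomes
G^{*}(\omega) = G + \eta\,(i\omega)^{\alpha}
```

For $\alpha = 1$ the classical viscous dashpot is recovered; for $\alpha < 1$ the loss modulus scales as $\omega^{\alpha}$, which is why a single fractional element (few parameters) can fit damping over a broad frequency band.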
Abstract:
The El Niño phenomenon is the leading mode of interannual sea surface temperature variability. It can affect weather patterns worldwide and therefore crop production. Crop models are useful tools for impact and predictability applications, allowing long time series of potential and attainable crop yield to be obtained, unlike the observed crop yield time series available for many countries. Using this tool, crop yield variability at a location in the Iberian Peninsula (IP) has been previously studied, finding predictability from Pacific El Niño conditions. Nevertheless, this work had not been done for an extended area. The present work analyzes maize yield variability in the IP for the whole twentieth century, using a crop model calibrated at five contrasting Spanish locations and reanalysis climate datasets to obtain long time series of potential yield. The study tests the use of reanalysis data for obtaining purely climate-dependent time series of crop yield for the whole region, and uses these yields to analyze the influence of oceanic and atmospheric patterns. The results show good reliability of the reanalysis data. The spatial distribution of the leading principal component of yield variability shows a similar behaviour over all the studied locations in the IP. The strong linear correlation between the El Niño index and yield is remarkable; this relation is non-stationary in time, although the air temperature-yield relationship persists, with the strongest influence during the grain-filling period. Regarding atmospheric patterns, the summer Scandinavian pattern has a significant influence on yield in the IP. The potential usefulness of this study lies in applying the relationships found to improve crop forecasting in the IP.
Abstract:
Conceptual frameworks of dryland degradation commonly include ecohydrological feedbacks between landscape spatial organization and resource loss, whereby decreasing cover and size of vegetation patches result in higher water and soil losses, which lead to further vegetation loss. However, the impacts of these feedbacks on dryland dynamics in response to external stress have barely been tested. Using a spatially-explicit model, we represented feedbacks between vegetation pattern and landscape resource loss by establishing a negative dependence of plant establishment on the connectivity of runoff-source areas (e.g., bare soils). We assessed the impact of various feedback strengths on the response of dryland ecosystems to changing external conditions. In general, for a given external pressure, these connectivity-mediated feedbacks decrease vegetation cover at equilibrium, which indicates a decrease in ecosystem resistance. Along a gradient of gradually increasing environmental pressure (e.g., aridity), the connectivity-mediated feedbacks decrease the amount of pressure required to cause a critical shift to a degraded state (ecosystem resilience). If environmental conditions improve, these feedbacks increase the pressure release needed to achieve ecosystem recovery (restoration potential). The impact of these feedbacks on dryland response to external stress is markedly non-linear, a consequence of the non-linear negative relationship between bare-soil connectivity and vegetation cover. Modelling studies on dryland vegetation dynamics that do not account for the connectivity-mediated feedbacks studied here may overestimate the resistance, resilience and restoration potential of drylands in response to environmental and human pressures. Our results also suggest that changes in vegetation pattern and associated hydrological connectivity may be more informative early-warning indicators of dryland degradation than changes in vegetation cover.
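As an illustration only, the feedback idea can be caricatured on a lattice where establishment on a bare cell becomes less likely as its bare-soil neighbourhood (a crude connectivity proxy) grows. This toy sketch is not the authors' model; every parameter value below is invented.

```python
import random

def step(grid, size, mortality, establishment, feedback, rng):
    """One synchronous update of a toy vegetation lattice."""
    new = [row[:] for row in grid]
    for i in range(size):
        for j in range(size):
            if grid[i][j]:                      # vegetated cell: may die
                if rng.random() < mortality:
                    new[i][j] = 0
            else:                               # bare cell: may be colonized
                nbrs = [grid[(i + di) % size][(j + dj) % size]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                bare_frac = 1 - sum(nbrs) / 4   # local bare-soil connectivity proxy
                # Establishment suppressed by connectivity of bare areas
                p_est = establishment * (1 - feedback * bare_frac)
                if rng.random() < p_est:
                    new[i][j] = 1
    return new

rng = random.Random(0)
size = 30
grid = [[1 if rng.random() < 0.5 else 0 for _ in range(size)] for _ in range(size)]
for _ in range(50):
    grid = step(grid, size, mortality=0.1, establishment=0.4, feedback=0.8, rng=rng)
cover = sum(map(sum, grid)) / size ** 2  # equilibrium vegetation cover
```

Raising `feedback` lowers the equilibrium `cover` for the same mortality pressure, which is the qualitative resistance effect described above.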
Abstract:
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were similar regardless of the location. Disturbances commonly occurred on slopes between 4 and 15°, below the Holocene marine limit, and in areas with low potential incoming solar radiation.
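A minimal numerical sketch of the GLM-plus-AUROC workflow, using synthetic stand-ins for the terrain predictors (the effect sizes, sample split, and fitting method are invented for illustration; the study's actual data and the GAM are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600
# Synthetic stand-ins for terrain predictors (e.g. slope, solar radiation,
# wetness index, elevation)
X = rng.normal(size=(n, 4))
# Assumed effect sizes purely for illustration
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit a logistic GLM by gradient ascent on the log-likelihood
Xd = np.column_stack([np.ones(n), X])       # add intercept column
train, test = slice(0, 450), slice(450, n)  # calibration / validation split
w = np.zeros(Xd.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xd[train] @ w))
    w += 0.01 * Xd[train].T @ (y[train] - p) / 450

# AUROC on the held-out split via the rank-sum (Mann-Whitney) identity
scores = Xd[test] @ w
pos, neg = scores[y[test] == 1], scores[y[test] == 0]
auroc = (pos[:, None] > neg[None, :]).mean()
```

An AUROC well above 0.5 on held-out points mirrors the validation criterion the study reports for its calibration and transfer sites.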
Abstract:
The boundary element method (BEM) was used to study galvanic corrosion using linear and logarithmic boundary conditions. The linear boundary condition was implemented using both the linear approach and the piecewise linear approach; the logarithmic boundary condition was implemented using the piecewise linear approach. The calculated potential and current density distributions were compared with prior analytical results. For the linear boundary condition, the BEASY program using the linear approach and the piecewise linear approach gave accurate predictions of the potential and galvanic current density distributions for varied electrolyte conditions, film thicknesses, electrolyte conductivities and anode/cathode area ratios. The 50-point piecewise linear method could be used with both linear and logarithmic polarization curves.
Abstract:
By carefully controlling the concentration of alpha,omega-thiol polystyrene in solution, we achieved the formation of unique monocyclic polystyrene chains (i.e., polymer chains with only one disulfide linkage). The presence of cyclic polystyrene was confirmed by its lower-than-expected molecular weight, due to a lower hydrodynamic volume, and by the loss of thiol groups as detected using Ellman's reagent. The alpha,omega-thiol polystyrene was synthesized by polymerizing styrene in the presence of a difunctional RAFT agent and subsequent conversion of the dithioester end groups to thiols via the addition of hexylamine. Oxidation gave either monocyclic polymer chains (i.e., with only one disulfide linkage) or linear multiblock polymers with many disulfide linkages, depending on the concentration of polymer used, with a greater chance of cyclization in more dilute solutions. At high polymer concentrations, linear multiblock polymers were formed. To control the MWD of these linear multiblocks, monofunctional X-PSTY (X = PhCH2C(S)-S-) was added. It was found that the greatest ratio of X-PSTY to X-PSTY-X resulted in a low Mn and PDI. We have shown that we can control both the structure and MWD using this chemistry; more importantly, such disulfide linkages can be readily reduced back to the starting polystyrene with thiol end groups, which has potential use for a recyclable polymer material.
Abstract:
Since Z, being a state-based language, describes a system in terms of its state and potential state changes, it is natural to want to describe properties of a specified system also in terms of its state. One means of doing this is to use Linear Temporal Logic (LTL), in which properties about the state of a system over time can be captured. This, however, raises the question of whether these properties are preserved under refinement. Refinement is observation preserving, and the state of a specified system is regarded as internal and, hence, non-observable. In this paper, we investigate this issue by addressing the following questions. Given that a Z specification A is refined by a Z specification C, and that P is a temporal logic property which holds for A, what temporal logic property Q can we deduce holds for C? Furthermore, under what circumstances does the property Q preserve the intended meaning of the property P? The paper answers these questions for LTL, but the approach could also be applied to other temporal logics over states such as CTL and the μ-calculus.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
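The double-well drift mentioned above comes from the bistable potential $U(x) = (x^2 - 1)^2$. As a generic illustration of the kind of sampler being mixed (a plain random-walk Metropolis targeting the bimodal density $\propto e^{-U(x)}$, not the paper's variational-proposal path sampler):

```python
import math
import random

def U(x):
    # Double-well potential with minima at x = -1 and x = +1
    return (x * x - 1.0) ** 2

def metropolis(n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler for the density proportional to exp(-U(x))."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        # Accept with probability min(1, exp(U(x) - U(prop)))
        if rng.random() < math.exp(min(0.0, U(x) - U(prop))):
            x = prop
        samples.append(x)
    return samples

samples = metropolis(20000)
mean_abs = sum(abs(s) for s in samples) / len(samples)
```

The samples concentrate near the two wells at $\pm 1$; it is exactly this multi-modal structure that makes naive samplers mix poorly and motivates the variational proposal and blocking strategies of the paper.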
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, and knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
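The sample-size rule of thumb and the R-squared check above can be sketched numerically. This is a hypothetical example with synthetic data; the 10x multiplier and the coefficient values are illustrative choices, not prescriptions from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_vars = 100, 4
# Rule of thumb from the text: at least 5-10 observations per predictor
assert n_obs >= 10 * n_vars

X = rng.normal(size=(n_obs, n_vars))
beta = np.array([2.0, -1.0, 0.5, 0.0])          # last predictor is irrelevant
y = X @ beta + rng.normal(scale=1.0, size=n_obs)

# Ordinary least squares with an intercept
Xd = np.column_stack([np.ones(n_obs), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# Coefficient of determination: 1 - SS_resid / SS_total
resid = y - Xd @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

With strongly predictive variables `r2` lands well above the 50% threshold the text treats as a minimum for a meaningful model; dropping the informative predictors would push it below that line.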
Abstract:
The new technology of combinatorial chemistry has been introduced to pharmaceutical companies, improving the process of drug discovery and making it more efficient. Automated combinatorial chemistry in the solution phase has been used to prepare a large number of compounds for anti-cancer screening. A library of caffeic acid derivatives has been prepared by the Knoevenagel condensation of aldehyde and active methylene reagents. These products have been screened against two murine adenocarcinoma cell lines (MAC) which are generally refractory to standard cytotoxic agents. The target of the anti-proliferative action was the 12- and 15-lipoxygenase enzymes, upon which these tumour cell lines have been shown to depend for proliferation and metastasis. Compounds were compared to a standard lipoxygenase inhibitor and, if found to be active anti-proliferative agents, were tested for their general cytotoxicity and lipoxygenase inhibition. A solid-phase-bound catalyst, piperazinomethyl polystyrene, was devised and prepared for the improved generation of Knoevenagel condensation products. This piperazinomethyl polystyrene was compared to the traditional liquid catalyst, piperidine, and was found to reduce the amount of by-products formed during the reaction, with the advantage of easy removal from the reaction. 13C NMR has been used to determine the E/Z stereochemistry of Knoevenagel condensation products. Soluble polymers have been prepared containing different building blocks pendant to the polymer backbone. Aldehyde building blocks incorporated into the polymer structure have been subjected to the Knoevenagel condensation. Cleavage of the resultant pendant molecules has proved that soluble linear polymers have the potential to generate combinatorial mixtures of known composition for biological testing. Novel catechol derivatives have been prepared by traditional solution-phase chemistry with the intention of transferring their synthesis to a solid-phase support.
Catechol derivatives prepared were found to be active inhibitors of lipoxygenase. Soluble linear supports for the preparation of these active compounds were designed and tested. The aim was to develop a support suitable for the automated synthesis of libraries of catechol derivatives for biological screening.
Abstract:
Objective: Development and validation of a selective and sensitive LC-MS method for the determination of methotrexate polyglutamates in dried blood spots (DBS). Methods: DBS samples (spiked or patient samples) were prepared by applying blood to Guthrie cards, which were then dried at room temperature. The method utilised 6-mm disks punched from the DBS samples (equivalent to approximately 12 μl of whole blood). The simple sample-treatment procedure was based on protein precipitation using perchloric acid followed by solid-phase extraction using MAX cartridges. The extracted sample was chromatographed using a reversed-phase system involving an Atlantis T3-C18 column (3 μm, 2.1 × 150 mm) preceded by an Atlantis guard column of matching chemistry. Analytes were subjected to LC-MS analysis using positive electrospray ionization. Key Results: The method was linear over the range 5-400 nmol/L. The limits of detection and quantification were 1.6 and 5 nmol/L for individual polyglutamates and 1.5 and 4.5 nmol/L for total polyglutamates, respectively. The method has been applied successfully to the determination of DBS finger-prick samples from 47 paediatric patients, and the results were confirmed against concentrations measured in matched RBC samples using a conventional HPLC-UV technique. Conclusions and Clinical Relevance: The methodology has potential for application in a range of clinical studies (e.g. pharmacokinetic evaluations or medication adherence assessment) since it is minimally invasive and easy to perform, potentially allowing parents to take blood samples at home. The feasibility of using DBS sampling can be of major value for future clinical trials or clinical care in paediatric rheumatology. © 2014 Hawwa et al.
Abstract:
Tuberculosis (TB), an infection caused by the human pathogen Mycobacterium tuberculosis, continues to kill millions each year and is as prevalent as it was in the pre-antimicrobial era. With the emergence of continuously evolving multi-drug resistant (MDR) strains and the implications of the HIV epidemic, it is crucial that new drugs with better efficacy and affordable cost are developed to treat TB. With this in mind, the first part of this thesis discusses the synthesis of libraries of derivatives of pyridine carboxamidrazones, along with cyclised (1,2,4-triazole and 1,2,4-oxadiazole) and fluorinated analogues. Microbiological screening against M. tuberculosis was carried out at the TAACF, NIAID and IDRI (USA). This confirmed the earlier findings that 2-pyridyl-substituted carboxamidrazones were more active than the 4-pyridyl-substituted carboxamidrazones. Another important observation was that upon cyclisation of these carboxamidrazones, a small number of the triazoles retained their activity while in most of the remaining compounds the activity was diminished. This might be attributed to the significant increase in logP value caused by cyclisation of these linear carboxamidrazones, resulting in high lipophilicity and decreased permeability. Another reason might be that the rigidity conferred upon the compound by cyclisation results in failure of the compound to fit into the active site of the putative target enzyme. In order to investigate the potential change to the compounds' metabolism in the organism and/or host, the most active compounds were selected and a fluorine atom was introduced into the pyridine ring. The microbiological results show a drastic improvement in the activity of the fluorinated carboxamidrazone amides compared to their non-fluorinated counterparts. This improvement in activity could possibly be the result of increased cell permeability caused by the fluorine.
In a subsidiary strand, a selection of long-chain unsaturated carboxylic esters, together with keto- and hydroxy-substituted carboxylic esters, structurally similar to mycolic acids, was synthesised. The microbiological data revealed that one of the open-chain compounds was active against the Mycobacterium tuberculosis H37Rv strain and some resistant isolates. The compound's activity may derive from its potential to disrupt mycobacterial cell wall synthesis by interfering with the FAS-II pathway.