760 results for Lipschitz trivial
Abstract:
Drug combinations can improve the efficacy of angiostatic cancer treatment and enable the reduction of side effects and drug resistance. Combining drugs is non-trivial due to the high number of possibilities. We applied a feedback system control (FSC) technique with a population-based stochastic search algorithm to navigate the large parametric space of nine angiostatic drugs at four concentrations each and identify optimal low-dose drug combinations. This involved an iterative approach of in vitro testing of endothelial cell viability and algorithm-based analysis. The optimal synergistic drug combination, containing erlotinib, BEZ-235 and RAPTA-C, was reached in a small number of iterations. Final drug combinations showed enhanced endothelial cell specificity, synergistically inhibited the proliferation (p < 0.001), but not the migration, of endothelial cells, and drove increased numbers of endothelial cells into apoptosis (p < 0.01). Successful translation of this drug combination was achieved in two preclinical in vivo tumor models. Tumor growth was inhibited synergistically and significantly (p < 0.05 and p < 0.01, respectively) using reduced drug doses compared with optimal single-drug concentrations. Under the applied conditions, single-drug monotherapies had no or negligible activity in these models. We suggest that FSC can be used for the rapid identification of effective, reduced-dose, multi-drug combinations for the treatment of cancer and other diseases.
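The abstract does not detail the search algorithm; as a minimal sketch of what a population-based stochastic search over the 9-drug, 4-concentration grid can look like (in Python, with a mock in-silico score standing in for the in vitro viability readout, which in the real FSC loop comes from experiments):

    import random

    N_DRUGS, N_LEVELS = 9, 4   # nine drugs, four concentration levels each

    def viability(combo):
        # Placeholder objective: in the actual FSC scheme this value is the
        # measured endothelial cell viability, not a computed quantity.
        return sum(level * random.uniform(0.8, 1.2) for level in combo)

    def mutate(combo):
        c = list(combo)
        c[random.randrange(N_DRUGS)] = random.randrange(N_LEVELS)
        return tuple(c)

    def fsc_search(pop_size=20, iterations=10):
        population = [tuple(random.randrange(N_LEVELS) for _ in range(N_DRUGS))
                      for _ in range(pop_size)]
        for _ in range(iterations):
            population.sort(key=viability)      # lower viability = stronger effect
            survivors = population[:pop_size // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return min(population, key=viability)

    print(fsc_search())

Each iteration corresponds to one round of in vitro testing followed by algorithmic selection of the next candidate combinations.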
Abstract:
Maximum entropy modeling (Maxent) is a widely used algorithm for predicting species distributions across space and time. Properly assessing the uncertainty in such predictions is non-trivial and requires validation with independent datasets. Notably, model complexity (the number of model parameters) remains a major concern in relation to overfitting and, hence, the transferability of Maxent models. An emerging approach is to validate the cross-temporal transferability of model predictions using paleoecological data. In this study, we assess the effect of model complexity on the performance of Maxent projections across time, using two European plant species (Alnus glutinosa (L.) Gaertn. and Corylus avellana L.) with an extensive late Quaternary fossil record in Spain as a case study. We fit 110 models with different levels of complexity under present-day conditions and tested model performance using the AUC (area under the receiver operating characteristic curve) and AICc (corrected Akaike Information Criterion) through the standard procedure of randomly partitioning current occurrence data. We then compared these results to an independent validation by projecting the models to mid-Holocene (6000 years before present) climatic conditions in Spain to assess their ability to predict fossil pollen presence-absence and abundance. We find that calibrating Maxent models with default settings results in overly complex models. While model performance increased with model complexity when predicting current distributions, it was highest at intermediate complexity when predicting mid-Holocene distributions. Hence, models of intermediate complexity offered the best trade-off for predicting species distributions across time. Reliable temporal model transferability is especially relevant for forecasting species distributions under future climate change. Consequently, species-specific model tuning should be used to find the best modeling settings to control for complexity, notably with paleoecological data to independently validate model projections. For cross-temporal projections of species distributions for which paleoecological data are not available, models of intermediate complexity should be selected.
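For reference, the corrected Akaike Information Criterion used here for model selection is the standard small-sample form (the general formula, not anything specific to this study):

    \mathrm{AIC}_c \;=\; -2\ln\hat{L} \;+\; 2k \;+\; \frac{2k(k+1)}{n-k-1},

where \hat{L} is the maximized likelihood, k the number of model parameters and n the number of occurrence records; the last term penalizes complexity increasingly strongly as n/k decreases.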
Abstract:
We derive a one-dimensional formulation of the Planck-Nernst-Poisson equation to describe the dynamics of a symmetric binary electrolyte in channels whose cross-section is nanometric and varies along the axial direction. The approach is in the spirit of the Fick-Jacobs diffusion equation and leads to a system of coupled equations for the partial densities which depend on the charge sitting at the walls in a non-trivial fashion. We consider two kinds of non-uniformity, those due to the spatial variation of the charge distribution and those due to the shape variation of the pore, and report one- and three-dimensional solutions of the electrokinetic equations.
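For context, the Fick-Jacobs equation mentioned above reduces diffusion of a neutral solute in a channel of varying cross-section A(x) to one dimension; in its standard form (the coupled electrokinetic system derived here generalizes this to the two charged species),

    \frac{\partial \rho(x,t)}{\partial t} \;=\; \frac{\partial}{\partial x}\left[ D\, A(x)\, \frac{\partial}{\partial x}\, \frac{\rho(x,t)}{A(x)} \right],

where \rho is the cross-section-averaged linear density and D the bulk diffusion coefficient.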
Abstract:
'What would you do if the only way to save five people from certain death in a random accident were to deliberately cause the death of another person who is not in danger?' It is not an easy question, nor is the answer trivial. We make decisions constantly, often inconsequential ones (which shirt shall I wear today?), but also many that carry a certain moral component and can affect other people.
Abstract:
The recent trend for journals to require open access to the primary data included in publications has been embraced by many biologists, but has caused apprehension amongst researchers engaged in long-term ecological and evolutionary studies. A worldwide survey of 73 principal investigators (PIs) with long-term studies revealed positive attitudes towards sharing data with the agreement or involvement of the PI, and 93% of PIs have historically shared data. Only 8% were in favor of uncontrolled, open access to primary data, while 63% expressed serious concern. We present here their viewpoint on an issue that can have non-trivial scientific consequences. We discuss the potential costs of public data archiving and provide possible solutions to meet the needs of journals and researchers.
Abstract:
BACKGROUND: The structure and organisation of ecological interactions within an ecosystem is modified by the evolution and coevolution of the individual species it contains. Understanding how historical conditions have shaped this architecture is vital for understanding system responses to change at scales from the microbial upwards. However, in the absence of a group selection process, the collective behaviours and ecosystem functions exhibited by the whole community cannot be organised or adapted in a Darwinian sense. A long-standing open question thus persists: Are there alternative organising principles that enable us to understand and predict how the coevolution of the component species creates and maintains complex collective behaviours exhibited by the ecosystem as a whole? RESULTS: Here we answer this question by incorporating principles from connectionist learning, a previously unrelated discipline with well-developed theories on how emergent behaviours arise in simple networks. Specifically, we show conditions under which natural selection on ecological interactions is functionally equivalent to a simple type of connectionist learning, 'unsupervised learning', well known in neural-network models of cognitive systems to produce many non-trivial collective behaviours. Accordingly, we find that a community can self-organise in a well-defined and non-trivial sense without selection at the community level; its organisation can be conditioned by past experience in the same sense as connectionist learning models habituate to stimuli. This conditioning drives the community to form a distributed ecological memory of multiple past states, causing the community to: a) converge to these states from any random initial composition; b) accurately restore historical compositions from small fragments; c) recover a state composition following disturbance; and d) correctly classify ambiguous initial compositions according to their similarity to learned compositions. We examine how the formation of alternative stable states alters the community's response to changing environmental forcing, and we identify conditions under which the ecosystem exhibits hysteresis with the potential for catastrophic regime shifts. CONCLUSIONS: This work highlights the potential of connectionist theory to expand our understanding of evo-eco dynamics and collective ecological behaviours. Within this framework we find that, despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal conditions that habituate to past environmental conditions and actively recalling those conditions. REVIEWERS: This article was reviewed by Prof. Ricard V Solé, Universitat Pompeu Fabra, Barcelona and Prof. Rob Knight, University of Colorado, Boulder.
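The 'unsupervised learning' invoked here is of the Hebbian, attractor-network kind; a minimal Hopfield-style sketch (binary ±1 'presence/absence' states and a Hebbian weight rule, offered as an illustration of the connectionist mechanism rather than the paper's ecological model) showing storage of multiple compositions and recall from a fragment:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                        # species / network nodes
    patterns = rng.choice([-1, 1], size=(3, n))   # three "historical compositions"

    # Hebbian (unsupervised) learning: strengthen links between co-occurring states.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    def recall(state, steps=20):
        # Repeated updates drive the state toward the nearest stored composition.
        for _ in range(steps):
            state = np.sign(W @ state)
            state[state == 0] = 1
        return state

    fragment = patterns[0].astype(float).copy()
    fragment[n // 2:] = rng.choice([-1, 1], size=n - n // 2)  # corrupt half
    restored = recall(fragment)
    print("overlap with stored state:", (restored == patterns[0]).mean())

The analogues of points (a)-(d) above are the standard behaviours of such associative memories: convergence from random states, pattern completion from fragments, recovery after perturbation, and classification by similarity.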
Abstract:
The aim of this report is to classify analytical methods based on flowing media and to define (standardize) terminology. After the classification and a discussion of terms describing the systems and component parts, a section is devoted to terms describing the performance of flow systems. The list of terms included is restricted to the most relevant ones; in particular, "self-explanatory" terms are left out. It is emphasised that the usage of terms or expressions that do not adequately describe the processes or procedures involved should be strongly discouraged. Although belonging to the category of methods based on flowing media, chromatographic methods are not covered by the present document. However, care has been taken that the present text is not in conflict with definitions in that domain. In documents in which flow methods are described, it should be clearly indicated how the sample and/or reagent is introduced and how the sample zone is transported. When introducing new techniques in the field, or variants of existing techniques, it is strongly recommended that descriptive terms rather than trivial or elaborate names be used.
Abstract:
Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the "energy" (or "mass", etc.) of that pattern. The overall "energy" of a configuration is simply the sum of the energy of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often the energy values are non-negative integers and are interpreted as the number of "particles" distributed on a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation laws by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA. We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian, and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
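As a concrete illustration of the particle picture (a standard textbook example, not taken from this work), elementary CA rule 184 moves each particle one cell to the right when the next cell is empty, so the total particle number is a conserved energy; a quick numerical check:

    import random

    def rule184_step(cells):
        # A 1 moves right into an empty cell; otherwise it stays put.
        n = len(cells)
        return [int((cells[i - 1] == 1 and cells[i] == 0) or
                    (cells[i] == 1 and cells[(i + 1) % n] == 1))
                for i in range(n)]

    cells = [random.randint(0, 1) for _ in range(100)]   # periodic ring
    total = sum(cells)
    for _ in range(50):
        cells = rule184_step(cells)
        assert sum(cells) == total   # the conservation law holds at every step
    print("particles conserved:", total)

Rule 184 also admits the kind of microscopic explanation discussed above: the conserved quantity is accounted for by a local flow of particles.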
Abstract:
This thesis concentrates on developing a practical local-approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit-load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow together with the microscopic internal-necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be predicted well with the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
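For orientation, the yield function of the Gurson-Tvergaard model on which the above integration algorithms operate has the standard form

    \Phi \;=\; \frac{q^2}{\sigma_y^2} \;+\; 2 q_1 f \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right) \;-\; \left(1 + q_3 f^2\right) \;=\; 0,

where q is the von Mises equivalent stress, \sigma_m the hydrostatic (mean) stress, \sigma_y the matrix yield stress, f the void volume fraction and q_1, q_2, q_3 the Tvergaard parameters (commonly q_1 = 1.5, q_2 = 1.0, q_3 = q_1^2; these are the usual defaults, not values calibrated in the thesis). The hydrostatic/deviatoric stress decomposition exploited by the consistent tangent moduli enters through the separate dependence on \sigma_m and q.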
Abstract:
We present an Analytic Model of Intergalactic-medium and GAlaxy (AMIGA) evolution since the dark ages. AMIGA is in the spirit of the popular semi-analytic models of galaxy formation, although it does not use halo merger trees but instead interpolates halo properties in grids that are built progressively. This strategy is less memory-demanding and allows one to start modeling at sufficiently high redshifts and low halo masses to have trivial boundary conditions. The number of free parameters is minimized by making a causal connection between physical processes usually treated as independent of each other, which leads to more reliable predictions. The strongest points of AMIGA, however, are the following: (1) the inclusion of molecular cooling and metal-poor, population III (Pop III) stars with the most dramatic feedback, and (2) accurate follow-up of the temperature and volume filling factor of neutral, singly ionized, and doubly ionized regions, taking into account the distinct halo mass functions in those environments. We find the following general results. Massive Pop III stars determine the intergalactic medium metallicity and temperature, and the growth of spheroids and disks is self-regulated by that of massive black holes (MBHs) developed from the remnants of those stars. However, the properties of normal galaxies and active galactic nuclei appear to be quite insensitive to Pop III star properties, owing to the much higher yield of ordinary stars compared to Pop III stars and the dramatic growth of MBHs when normal galaxies begin to develop, which causes the memory of the initial conditions to be lost.
Abstract:
Ultrafast 2D NMR is a powerful methodology that allows the recording of a 2D NMR spectrum in a fraction of a second. However, due to the numerous non-conventional parameters involved in this methodology, its implementation is no trivial task. Here, an optimized experimental protocol is carefully described to ensure efficient implementation of ultrafast NMR. The ultrafast spectra resulting from this implementation are presented using the example of two widely used 2D NMR experiments, COSY and HSQC, obtained in 0.2 s and 41 s, respectively.
Abstract:
Direct measurements of the oxidation-reduction potential (ORP) have been used to infer the degree of electron availability in waters, wastewaters, sediments and soils. Although the interpretation of the results obtained in direct measurements is not trivial, this parameter is part of a list of compulsory determinations required by many Environmental State Agencies as well as consulting companies. Nonetheless, the vast majority of reported E_H values are not corrected for the reference electrode used, which makes most of the data incomparable with the literature and unsuitable for a correct environmental diagnosis.
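The missing correction is a fixed offset that converts a reading to the standard hydrogen electrode (SHE) scale:

    E_H \;=\; E_{\mathrm{measured}} \;+\; E_{\mathrm{ref}},

where E_ref is the potential of the reference electrode against the SHE, typically about +200 mV for Ag/AgCl (saturated KCl) and about +244 mV for a saturated calomel electrode at 25 °C (indicative values only; the exact offset depends on the filling solution and the temperature).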
Abstract:
The conventional curriculum of Analytical Chemistry undergraduate courses emphasizes the introduction of techniques, methods and procedures used for instrumental analysis. All these concepts must be integrated into a sound conceptual framework to allow students to make appropriate decisions. Method calibration is one of the most critical concepts to be grasped, since most analytical techniques depend on it for quantitative analysis. The conceptual understanding of calibration is not trivial for undergraduate students. External calibration is widely discussed during instrumental analysis courses. However, an understanding of the limitations of external calibration in correcting certain systematic errors is not directly derived from laboratory examples. The conceptual understanding of the other calibration methods (standard addition, matrix matching, and internal standard) is therefore imperative. The aim of this work is to present a simple experiment using grains (beans, corn and chickpeas) to explore the different types of calibration methods.
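As a reminder of the arithmetic behind one of these methods, standard addition extrapolates the signal-versus-added-concentration line back to zero signal; a minimal sketch with invented numbers (not data from the experiment described):

    import numpy as np

    # Added standard concentration (mg/L) and instrument signal (arbitrary units).
    c_added = np.array([0.0, 1.0, 2.0, 3.0])
    signal = np.array([0.42, 0.85, 1.27, 1.70])   # hypothetical readings

    slope, intercept = np.polyfit(c_added, signal, 1)

    # signal = slope * (c_x + c_added), so the unknown concentration c_x is the
    # magnitude of the x-axis intercept:
    c_x = intercept / slope
    print(f"estimated analyte concentration: {c_x:.2f} mg/L")

Because the standards are added to the sample itself, the method corrects for proportional matrix effects that external calibration cannot.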
Abstract:
In this thesis I discuss the exact dynamics of simple non-Markovian systems. I focus on fundamental questions at the core of non-Markovian theory and investigate the dynamics of quantum correlations under non-Markovian decoherence. In the first context I present the connection between two different non-Markovian approaches and compare two distinct definitions of non-Markovianity. The general aim is to characterize, in exemplary cases, which part of the environment is responsible for the feedback of information typical of non-Markovian dynamics. I also show how such a feedback of information is not always described by certain types of master equations commonly used to tackle non-Markovian dynamics. In the second context I characterize the dynamics of two qubits in a common non-Markovian reservoir, and introduce a new dynamical effect in a well-known model, i.e., two qubits under depolarizing channels. In the first model the exact solution of the dynamics is found, and the entanglement behavior is extensively studied. The non-Markovianity of the reservoir and the reservoir-mediated interaction between the qubits cause non-trivial dynamical features. The dynamical interplay between different types of correlations is also investigated. In the second model the study of quantum and classical correlations demonstrates the existence of a new effect: the sudden transition between classical and quantum decoherence. This phenomenon involves the complete preservation of the initial quantum correlations for long intervals of time, of the order of the relaxation time of the system.
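One widely used definition of non-Markovianity of the kind compared here (the Breuer-Laine-Piilo measure) is built on the trace distance between evolving states, which temporarily grows when information flows back from the environment; a minimal sketch of the underlying quantity (generic qubit states, not the thesis's specific models):

    import numpy as np

    def trace_distance(rho1, rho2):
        # D(rho1, rho2) = (1/2) Tr |rho1 - rho2|; a temporary increase of D
        # along the evolution signals a backflow of information.
        eigvals = np.linalg.eigvalsh(rho1 - rho2)
        return 0.5 * np.abs(eigvals).sum()

    plus = np.array([[0.5, 0.5], [0.5, 0.5]])     # |+><+|
    minus = np.array([[0.5, -0.5], [-0.5, 0.5]])  # |-><-|
    print(trace_distance(plus, minus))            # 1.0 for orthogonal states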
Abstract:
The present article shows that there are consistent and decidable many-valued systems of propositional logic which satisfy two or all three of the criteria for non-trivial inconsistent theories of da Costa (1974). The weaker of these paraconsistent systems is also able to avoid a series of paradoxes which come up when classical logic is applied to the empirical sciences. These paraconsistent systems are based on a 6-valued system of propositional logic for avoiding difficulties in several domains of empirical science (Weingartner (2009)).
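As a pocket-sized illustration of how a many-valued logic can tolerate inconsistency without being trivial (using Priest's well-known 3-valued logic LP rather than the article's 6-valued system), one can check that a designated contradiction does not make every formula designated:

    # Priest's LP: values 0 (false), 0.5 (both), 1 (true); designated if >= 0.5.
    VALUES = [0.0, 0.5, 1.0]

    def neg(a):
        return 1.0 - a

    # Explosion (A, ~A |= B) fails: look for a valuation where A and ~A are
    # both designated while an unrelated B is not.
    for a in VALUES:
        for b in VALUES:
            if a >= 0.5 and neg(a) >= 0.5 and b < 0.5:
                print(f"counterexample: A={a}, B={b}")

This is exactly the sense in which an inconsistent theory can remain non-trivial.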