909 results for External constraint
Abstract:
de Lima-Pardini AC, Papegaaij S, Cohen RG, Teixeira LA, Smith BA, Horak FB. The interaction of postural and voluntary strategies for stability in Parkinson's disease. J Neurophysiol 108: 1244-1252, 2012. First published June 6, 2012; doi:10.1152/jn.00118.2012. This study assessed the effects of stability constraints of a voluntary task on postural responses to an external perturbation in subjects with Parkinson's disease (PD) and healthy elderly participants. Eleven PD subjects and twelve control subjects were perturbed with backward surface translations while standing and performing two versions of a voluntary task: holding a tray with a cylinder placed with the flat side down [low constraint (LC)] or with the rolling, round side down [high constraint (HC)]. Participants performed alternating blocks of LC and HC trials. PD participants accomplished the voluntary task as well as control subjects, showing slower tray velocity in the HC condition compared with the LC condition. However, the latency of postural responses was longer in the HC condition only for control subjects. Control subjects presented different patterns of hip-shoulder coordination as a function of task constraint, whereas PD subjects had a relatively invariant pattern. Initiating the experiment with the HC task led to 1) decreased postural stability in PD subjects only and 2) reduced peak hip flexion in control subjects only. These results suggest that PD impairs the capacity to adapt postural responses to constraints imposed by a voluntary task.
Abstract:
We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
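For readers outside optimization, the setting assumed by this abstract (not spelled out above) is the standard nonlinear program; a constraint qualification is any condition on the constraints guaranteeing that every local minimizer satisfies the Karush-Kuhn-Tucker (KKT) conditions. A minimal sketch of that setting:

```latex
\min_{x \in \mathbb{R}^n} f(x)
\quad \text{s.t.} \quad h_i(x) = 0,\ i = 1,\dots,m,
\qquad g_j(x) \le 0,\ j = 1,\dots,p,
```

with KKT at a point $x^*$:

```latex
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*)
  + \sum_{j \in A(x^*)} \mu_j \nabla g_j(x^*) = 0,
\qquad \mu_j \ge 0,\ j \in A(x^*),
```

where $A(x^*)$ denotes the active inequality constraints. RCPLD, CRSC, and CPG all impose conditions on subsets of these constraint gradients in a neighborhood of $x^*$.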
Abstract:
A phylogenetic analysis of a fragment of the mitochondrial 16S gene was used to test the monophyletic status of Potimirim. Existing doubts about the taxonomic status of P. brasiliana (once P. glabra) and P. potimirim (once P. mexicana) were clarified. Potimirim mexicana and P. potimirim are distinct species according to molecular data and appendix masculina morphology. A new species (Potimirim sp. 1) from Puerto Rico was revealed by molecular data, and it is evolutionarily related to P. potimirim and P. mexicana according to our analysis. We found three distinct species under the name P. glabra. We therefore recommend applying the name P. glabra to the populations of the Pacific slope of Central America and revalidating P. brasiliana for the Brazilian ones. The need for a new name for the Caribbean "P. glabra" is highlighted; it is provisionally referred to as Potimirim sp. 2. The ontogenetic (juvenile to adult) development of the appendix masculina of P. brasiliana was observed and compared with that of the other species of Potimirim (adults). In the light of our phylogenetic hypothesis, we postulate a pattern of character addition for the evolution of the appendix masculina of Potimirim. This hypothesis is plausible for two key reasons. First, Potimirim is a monophyletic group according to our hypothesis. Second, the shape of the appendix masculina found in adults of P. americana is similar and comparable to that found in the earliest juvenile stages of P. brasiliana, a derived species according to our phylogeny (P. americana, ((P. mexicana, Potimirim sp. 1, P. potimirim), (P. glabra, (P. brasiliana, Potimirim sp. 2)))). Thus, the basal P. americana retains the ancestral morphological state of the appendix masculina compared with the other species of Potimirim. In our interpretation, the ontogeny of the appendix masculina recapitulates the proposed phylogeny, giving it further support.
Abstract:
We study general properties of the Landau-gauge Gribov ghost form factor sigma(p^2) for SU(N_c) Yang-Mills theories in the d-dimensional case. We find a qualitatively different behavior for d = 3, 4 with respect to the d = 2 case. In particular, considering any (sufficiently regular) gluon propagator D(p^2) and the one-loop-corrected ghost propagator, we prove in the 2d case that the function sigma(p^2) blows up in the infrared limit p -> 0 as -D(0) ln(p^2). Thus, for d = 2, the no-pole condition sigma(p^2) < 1 (for p^2 > 0) can be satisfied only if the gluon propagator vanishes at zero momentum, that is, D(0) = 0. On the contrary, in d = 3 and 4, sigma(p^2) is finite also if D(0) > 0. The same results are obtained by evaluating the ghost propagator G(p^2) explicitly at one loop, using fitting forms for D(p^2) that describe well the numerical data for the gluon propagator in two, three and four space-time dimensions in the SU(2) case. These evaluations also show that, if one considers the coupling constant g^2 as a free parameter, the ghost propagator admits a one-parameter family of behaviors (labeled by g^2), in agreement with previous works by Boucaud et al. In this case the condition sigma(0) <= 1 implies g^2 <= g_c^2, where g_c^2 is a "critical" value. Moreover, a free-like ghost propagator in the infrared limit is obtained for any value of g^2 smaller than g_c^2, while for g^2 = g_c^2 one finds an infrared-enhanced ghost propagator. Finally, we analyze the Dyson-Schwinger equation for sigma(p^2) and show that, for infrared-finite ghost-gluon vertices, one can bound the ghost form factor sigma(p^2). Using these bounds we find again that only in the d = 2 case does one need to impose D(0) = 0 in order to satisfy the no-pole condition. The d = 2 result is also supported by an analysis of the Dyson-Schwinger equation using a spectral representation for the ghost propagator.
Thus, if the no-pole condition is imposed, solving the d = 2 Dyson-Schwinger equations cannot lead to a massive behavior for the gluon propagator. These results apply to any Gribov copy inside the so-called first Gribov horizon; i.e., the 2d result D(0) = 0 is not affected by Gribov noise. These findings are also in agreement with lattice data.
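In compact form, the central two-dimensional result quoted above (restated here in symbols, nothing added) reads:

```latex
\sigma(p^2) \;\sim\; -\,D(0)\,\ln(p^2) \quad (p \to 0,\ d = 2),
\qquad \text{so that } \; \sigma(p^2) < 1 \ \text{for } p^2 > 0
\;\Longrightarrow\; D(0) = 0 .
```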
Abstract:
Introduction: The purpose of this study was to analyze the influence of ultrasonic activation of calcium hydroxide (CH) pastes on pH and calcium release in simulated external root resorptions. Methods: Forty-six bovine incisors had their canals cleaned and instrumented, and defects were created in the external middle third of the roots, which were then used for the study. The teeth were externally made impermeable, except for the defect area, and divided into the following 4 groups containing 10 samples each according to the CH paste and the use or not of ultrasonic activation: group 1: propylene glycol without ultrasonic activation, group 2: distilled water without ultrasonic activation, group 3: propylene glycol with ultrasonic activation, and group 4: distilled water with ultrasonic activation. After filling the canals with the paste, the teeth were restored and individually immersed in flasks with ultrapure water. The samples were placed into other flasks after 7, 15, and 30 days so that the water pH level could be measured by means of a pH meter. Calcium release was measured by means of an atomic absorption spectrophotometer. Six teeth were used as controls. The results were statistically compared using the Kruskal-Wallis and Mann-Whitney U tests (P < .05). Results: For all periods analyzed, the pH level was found to be higher when the CH paste was activated with ultrasound. Calcium release was significantly greater (P < .05) using ultrasonic activation after 7 and 30 days. Conclusions: The ultrasonic activation of CH pastes favored a higher pH level and calcium release in simulated external root resorptions. (J Endod 2012;38:834-837)
Abstract:
The study of the effects of spatially uniform fields on the steady-state properties of Axelrod's model has yielded plenty of counterintuitive results. Here, we reexamine the impact of this type of field for a selection of parameters such that the field-free steady state of the model is heterogeneous or multicultural. Analyses of both one- and two-dimensional versions of Axelrod's model indicate that the steady state remains heterogeneous regardless of the value of the field strength. Turning on the field leads to a discontinuous decrease in the number of cultural domains, which we argue is due to the instability of zero-field heterogeneous absorbing configurations. We find, however, that spatially nonuniform fields that implement a consensus rule among the neighborhood of the agents enforce homogenization. Although the overall effects of the fields are essentially the same irrespective of the dimensionality of the model, we argue that the dimensionality has a significant impact on the stability of the field-free homogeneous steady state.
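The dynamics described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the update is the standard Axelrod interaction (copy one differing cultural feature with probability equal to the cultural overlap), with a spatially uniform external field acting as an extra virtual neighbor chosen with probability `field_strength` (a parameter name of our choosing); the domain count scans the chain left to right.

```python
import random

def axelrod_step(culture, field, field_strength, F, rng):
    """One interaction of a 1-d Axelrod model with a spatially uniform field.

    culture: list of agents on a ring, each a list of F integer traits.
    field: fixed trait vector acting as a virtual neighbor of every agent.
    field_strength: probability of interacting with the field instead of
    a lattice neighbor.
    """
    n = len(culture)
    i = rng.randrange(n)
    # choose the interaction partner: the external field or a ring neighbor
    if rng.random() < field_strength:
        partner = field
    else:
        partner = culture[(i + rng.choice([-1, 1])) % n]
    agent = culture[i]
    shared = [k for k in range(F) if agent[k] == partner[k]]
    differing = [k for k in range(F) if agent[k] != partner[k]]
    # interact with probability equal to the overlap; copy one differing trait
    if differing and rng.random() < len(shared) / F:
        k = rng.choice(differing)
        agent[k] = partner[k]

def count_domains(culture):
    """Number of cultural domains: runs of identical neighbors, scanned
    left to right (the periodic wrap is ignored for simplicity)."""
    return 1 + sum(1 for a, b in zip(culture, culture[1:]) if a != b)
```

Running many such steps and tracking `count_domains` is enough to observe the discontinuous drop in the number of domains when the field is switched on.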
Abstract:
Abstract Background A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyze gene regulatory interactions using the Boolean network model and time-series data. The Boolean network considered is restricted in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full search. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results We applied the proposed algorithm to two data sets. First, we used an artificial data set obtained from a model of the budding yeast cell cycle. The second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully or at least partially determined under the Boolean model considered. Conclusions The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series gene expression data, and this inference process can be aided by available a priori knowledge.
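The inference idea can be illustrated with a brute-force sketch; this is our illustration, not the paper's CSP algorithm: for one target gene, enumerate candidate input sets and truth tables and keep those consistent with every observed transition. The restricted Boolean class used in the paper would further prune the candidate tables, which is precisely what makes the CSP formulation tractable.

```python
from itertools import combinations, product

def consistent_predictors(series, target, max_inputs=2):
    """Enumerate input sets and truth tables consistent with a time series.

    series: list of network states, each a tuple of 0/1 gene values.
    target: index of the gene whose update function we try to infer.
    Returns the (inputs, truth_table) pairs that reproduce every observed
    transition series[t] -> series[t+1][target].
    """
    n_genes = len(series[0])
    transitions = list(zip(series[:-1], series[1:]))
    solutions = []
    for k in range(1, max_inputs + 1):
        for inputs in combinations(range(n_genes), k):
            for table in product([0, 1], repeat=2 ** k):
                # index the truth table by the binary word of the input genes
                ok = all(
                    table[int("".join(str(s[i]) for i in inputs), 2)]
                    == s_next[target]
                    for s, s_next in transitions
                )
                if ok:
                    solutions.append((inputs, table))
    return solutions
```

When several candidate functions survive, the interaction is only partially determined, matching the "fully or at least partially determined" outcome reported above.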
Abstract:
Fluoridation of public water supplies is recognized as among the top ten public health achievements of the twentieth century. However, the benefits of this measure depend on maintaining fluoride concentrations within adequate levels. The aim of this study was to report the results of seven years of external control of fluoride (F) concentrations in the public water supply of Bauru, SP, Brazil, verifying, on the basis of the risk/benefit balance, whether the levels are appropriate. From March 2004 to February 2011, 60 samples were collected every month from the 19 supply sectors of the city, totaling 4,641 samples. F concentrations in the water samples were determined in duplicate using an ion-specific electrode (Orion 9609) coupled to a potentiometer, after buffering with TISAB II. After the analysis, the samples were classified according to the best risk/benefit balance. Means (± standard deviation) of F concentrations ranged between 0.73±0.06 and 0.81±0.10 mg/L for the different sectors during the seven years. The individual values ranged between 0.03 and 2.63 mg/L. The percentages of samples considered “low risk” for the development of dental fluorosis and of “maximum benefit” for dental caries prevention (0.55-0.84 mg F/L) in the first, second, third, fourth, fifth, sixth, and seventh years of the study were 82.0, 58.5, 37.4, 61.0, 89.9, 77.3, and 72.4%, respectively, and 69.0% for the entire period. Fluctuations of F levels were found in the public water supply of Bauru during the seven years of evaluation. These results suggest that external monitoring of water fluoridation by an independent assessor should be implemented in cities with adjusted fluoridation, and should be continued in order to verify that fluoride levels are suitable and, if not, to provide support for the appropriate adjustments.
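The sample classification used above can be made concrete with the 0.55-0.84 mg F/L band quoted in the abstract; the labels for out-of-range samples below are ours, for illustration only, and the code is a sketch rather than the study's actual classification scheme.

```python
def classify_fluoride(mg_per_l):
    """Classify a water-fluoride measurement against the 0.55-0.84 mg F/L
    band described in the study (low fluorosis risk, maximum caries-
    prevention benefit). Out-of-range labels are assumed for illustration."""
    if mg_per_l < 0.55:
        return "below optimal range"
    if mg_per_l <= 0.84:
        return "low risk / maximum benefit"
    return "above optimal range"

def percent_optimal(samples):
    """Percentage of samples falling in the optimal band, as reported
    per year in the study (e.g. 82.0% in year one)."""
    hits = sum(
        1 for s in samples
        if classify_fluoride(s) == "low risk / maximum benefit"
    )
    return 100.0 * hits / len(samples)
```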
Abstract:
The Brazilian network for genotyping is composed of 21 laboratories that perform and analyze genotyping tests for all HIV-infected patients within the public system, performing approximately 25,000 tests per year. We assessed the interlaboratory and intralaboratory reproducibility of genotyping systems by creating and implementing a local external quality control evaluation. Plasma samples from HIV-1-infected individuals (with low and intermediate viral loads) or RNA viral constructs with specific mutations were used. This evaluation included analyses of sensitivity and specificity of the tests based on qualitative and quantitative criteria, which scored laboratory performance on a 100-point system. Five evaluations were performed from 2003 to 2008, with 64% of laboratories scoring over 80 points in 2003, 81% doing so in 2005, 56% in 2006, 91% in 2007, and 90% in 2008 (Kruskal-Wallis, p = 0.003). Increased performance was aided by retraining laboratories that had specific deficiencies. The results emphasize the importance of investing in laboratory training and interpretation of DNA sequencing results, especially in developing countries where public (or scarce) resources are used to manage the AIDS epidemic.
Abstract:
Conservatism is a central theme of organismic evolution. Related species share characteristics due to their common ancestry. Concern has been raised among evolutionary biologists as to whether such conservatism is an expression of natural selection or of a constrained ability to adapt. This thesis explores adaptations and constraints within the plant reproductive phase, particularly in relation to the evolution of fleshy fruit types (berries, drupes, etc.) and the seasonal timing of flowering and fruiting. The different studies were arranged along a hierarchy of scale, from general data sets sampled among seed plants at the global scale, through more specific analyses of character evolution within the genus Rhamnus s.l. L. (Rhamnaceae), to descriptive and experimental field studies in a local population of Frangula alnus (Rhamnaceae). Apart from the field study, this thesis is mainly based on comparative methods explicitly incorporating phylogenetic relationships. The comparative study of Rhamnus s.l. species included the reconstruction of phylogenetic hypotheses based on DNA sequences. Among geographically overlapping sister clades, biotic pollination was not correlated with higher species richness when compared with wind-pollinated plants. Among woody plants, clades characterized by fleshy fruit types were more species-rich than their dry-fruited sister clades, suggesting that the fleshy fruit is a key innovation in woody habitats. Moreover, the evolution of fleshy fruits was correlated with a change to more closed (darker) habitats. An independent-contrast study within Rhamnus s.l. documented allometric relations between plant and fruit size. As a phylogenetic constraint, however, allometric effects must be considered weak or non-existent, as they did not prevail among the different subclades within Rhamnus s.l. Fruit size was correlated with seed size and seed number in F. alnus.
This thesis suggests that frugivore selection on fleshy fruits may be important in constraining the upper limits of fruit size when a plant lineage is colonizing (darker) habitats where larger seed size is adaptive. Phenological correlations with fruit set, dispersal, and seed size in F. alnus suggested that the evolution of reproductive phenology is constrained by trade-offs and partial interdependencies between the flowering, fruiting, dispersal, and recruitment phases. Phylogenetic constraints on the evolution of phenology were indicated by a lack of correlation between flowering time and seasonal length within Rhamnus cathartica and F. alnus, respectively. On the other hand, flowering time was correlated with seasonal length among Rhamnus s.l. species. Phenological differences between biotically pollinated and wind-pollinated angiosperms also suggested adaptive change in reproductive phenology.
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multi-sets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which permits tracing the application of propagation rules and consequently avoiding trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional one also uses a history to avoid trivial non-termination. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach which allows us to optimize a given program, in particular to improve its run-time efficiency or space consumption. Essentially it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. The unfolding technique is one of the basic operations used by most program transformation systems.
It consists in replacing a procedure call by its definition. In CHR every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, maintenance of confluence and termination between the original and transformed programs is shown. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics. Section 2.2 then presents the compositionality results. Afterwards, Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular syntax, called annotated syntax, which is introduced in Section 3.1; its related modified operational semantics ω_t is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness.
Then, in Section 3.4 the problems related to the replacement of a rule by its unfolded version are discussed and this in turn gives a correctness condition which holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related works and directions for future work.
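The unfolding step itself is easy to state in a propositional toy setting. The sketch below is ours and deliberately ignores guards, multi-headed rules, and unification, all of which the thesis must handle: a rule is a (head, body) pair, and unfolding replaces one occurrence of a defining rule's head inside another rule's body by that rule's body.

```python
def unfold(rule, definition):
    """Propositional sketch of the unfolding step: replace one occurrence
    of `definition`'s head inside `rule`'s body by `definition`'s body.

    Rules are (head, body) pairs with body a tuple of atoms. Real CHR
    unfolding handles guards, multiple heads, and unification, elided here.
    """
    head, body = rule
    dhead, dbody = definition
    if dhead not in body:
        return rule  # nothing to unfold
    i = body.index(dhead)
    return (head, body[:i] + dbody + body[i + 1:])
```

For example, unfolding the call "q" in a rule p <=> q, r against a definition q <=> s, t yields p <=> s, t, r; the correctness conditions in Section 3.4 govern when the unfolded rule may replace the original.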
Abstract:
This Doctoral Dissertation is triggered by an emergent trend: firms are increasingly referring to investments in corporate venture capital (CVC) as a means to create new competencies and foster the search for competitive advantage through the use of external resources. CVC is generally defined as the practice by non-financial firms of placing equity investments in entrepreneurial companies. Thus, CVC can be interpreted (i) as a key component of corporate entrepreneurship (acts of organizational creation, renewal, or innovation that occur within or outside an existing organization) and (ii) as a particular form of venture capital (VC) investment where the investor is not a traditional financial institution but an established corporation. My Dissertation thus refers simultaneously to two streams of research: corporate strategy and venture capital. In particular, I directed my attention to three topics of particular relevance for better understanding the role of CVC. In the first study, I started from the consideration that competitive environments with rapid technological change increasingly force established corporations to access knowledge from external sources. Firms thus engage extensively in external business development activities through different forms of collaboration with partners. While the process underlying these mechanisms is one of knowledge access, they are substantially different. The aim of the first study is to figure out how corporations choose among CVC, alliance, joint venture, and acquisition. I addressed this issue by adopting a multi-theoretical framework integrating the resource-based view and real options theory.
While the first study mainly looked into the use of external resources for corporate growth, in the second work I combined an internal and an external perspective to figure out the relationship between CVC investments (exploiting external resources) and a more traditional strategy to create competitive advantage, namely corporate diversification (based on internal resources). Adopting an explorative lens, I investigated how these different modes of renewing current corporate capabilities interact with each other. More precisely, is CVC a complement or a substitute to corporate diversification? Finally, the third study focused on the more general field of VC to investigate (i) how VC firms evaluate the patent portfolios of their potential investee companies and (ii) whether the ability to evaluate technology and intellectual property varies depending on the type of investor, in particular with regard to the distinction between specialized and generalist VCs and between independent and corporate VCs. This topic is motivated by two observations. First, it is not yet clear which determinants of patent value are primarily considered by VCs in their investment decisions. Second, VCs are not all alike in terms of technological experience, and these differences need to be taken into account.
Abstract:
The thesis work presented here investigates the application of learning techniques aimed at more efficient execution of a portfolio of constraint solvers. A constraint solver is a program that, given a constraint problem as input, computes a solution using a variety of techniques. Constraint problems are pervasive in real life: examples such as organizing train journeys or scheduling the crews of an airline are all constraint problems. A constraint problem is formalized as a constraint satisfaction problem (CSP). A CSP is described by a set of variables that can take values from a specific domain and a set of constraints relating the variables and the values they can take. One technique for optimizing the resolution of such problems is the portfolio approach. This technique, also used in fields such as economics, combines several solvers which together can produce better results than a single-solver approach. In this work we develop a new technique that combines a portfolio of constraint solvers with machine learning. Machine learning is a field of artificial intelligence that aims to endow machines with a kind of "intelligence". An application example is evaluating past instances of a problem and using them to make future choices, a process also found in human cognition. Specifically, we reason in terms of classification. A classification assigns a discrete output value to a set of input features, such as true or false when an e-mail is classified as spam or not. The learning phase is carried out using part of CPHydra, a constraint solver portfolio developed at University College Cork (UCC).
Of this portfolio algorithm we use only the characteristics employed to describe certain aspects of one CSP with respect to another; these characteristics are also called features. We then build a series of classifiers based on the specific behavior of the solvers. The combination of these classifiers with the portfolio approach aims to assess that CPHydra's features are good and that the classifiers based on them are reliable. To justify the first result, we compare against one of the best state-of-the-art portfolios, SATzilla. Once the quality of the features used for classification is established, we solve the problems by simulating a scheduler. These simulations test several rules built with the previously introduced classifiers. We first act on a single-processor scenario and then extend to a multi-processor scenario. In these experiments we verify that the performance obtained by applying the rules built on the classifiers is better than an execution limited to using the best solver of the portfolio. This thesis work was carried out in collaboration with the 4C research centre at University College Cork. A scientific paper based on this work has been submitted to the International Joint Conference on Artificial Intelligence (IJCAI) 2011. At the time of submission of this thesis we have not yet been informed of the paper's acceptance; however, the reviewers' responses indicated that the presented method is interesting.
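The classification-for-portfolios idea can be sketched minimally; this is our illustration, not CPHydra's actual algorithm: past runs are stored as (feature vector, best solver) cases, and a new CSP instance is routed to a solver by majority vote among its k nearest cases in feature space.

```python
def select_solver(features, cases, k=3):
    """k-nearest-neighbour solver selection over CSP feature vectors,
    in the spirit of CPHydra's case-based approach (details simplified).

    features: feature vector of the new CSP instance.
    cases: list of (feature_vector, best_solver) pairs from past runs.
    Returns the solver most often best among the k nearest cases.
    """
    def dist(a, b):
        # squared Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(cases, key=lambda c: dist(c[0], features))[:k]
    votes = {}
    for _, solver in nearest:
        votes[solver] = votes.get(solver, 0) + 1
    return max(votes, key=votes.get)
```

A scheduler such as the one simulated in the thesis would go further, splitting the available time budget among several solvers rather than committing to a single winner.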
Abstract:
The work presented in this thesis is set in the context of constraint programming, a paradigm for modeling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large class of these problems is naturally formulated in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent these variables. It is thus customary to represent such domains by interval-based approximations (henceforth, representations), specified by a lower bound and an upper bound under an appropriate ordering relation. The recent evolution of research on constraint programming over sets has clearly shown that combining different representations achieves performance orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency between the different representations is maintained and in how constraints are propagated in order to reduce the search space. Unfortunately, no formal tool exists for comparing these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of combination of representations, bringing out the common aspects that have characterized previous works. In particular, we identify two possible kinds of combination, a strong one and a weak one, defining the notions of bounds consistency on constraints and synchronization between representations.
Our study offers some interesting insights into existing combinations, highlighting their limits and revealing some surprises. We also provide a complexity analysis of the synchronization between minlex, a representation able to propagate lexicographic constraints optimally, and the main existing representations.
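The interval (bounds) representation of set domains can be made concrete with the classic subset-bounds form, where a domain is a pair of sets with lb ⊆ S ⊆ ub. The sketch below is illustrative only, showing bounds propagation for a single subset constraint rather than any of the representations or synchronization schemes analyzed in the thesis:

```python
def propagate_subset(x, y):
    """Bounds propagation for the constraint X ⊆ Y over subset-bounds domains.

    A set-variable domain is a pair (lb, ub) of frozensets with lb ⊆ S ⊆ ub.
    Returns the narrowed domains, or None if the constraint is unsatisfiable.
    """
    (xlb, xub), (ylb, yub) = x, y
    new_xub = xub & yub          # X may only contain elements Y may contain
    new_ylb = ylb | xlb          # Y must contain everything X must contain
    if not xlb <= new_xub or not new_ylb <= yub:
        return None              # lower bound exceeds upper bound: failure
    return (xlb, new_xub), (new_ylb, yub)
```

Synchronizing two different representations of the same variable amounts to repeatedly exchanging such bound tightenings until a fixed point is reached, which is exactly where the complexity questions studied here arise.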