816 results for penalty-based aggregation functions
Abstract:
The paper discusses the dynamics of capital accumulation in Latin American economies. The hypothesis is that in these economies, for structural reasons, the role of the State is comparatively broader than in the economies of the centers of capitalism. The argument draws mainly on Marx and Kalecki, along with historical elements of Latin American economies, particularly the Brazilian economy. The paper then explores the dynamic consequences of this nature at the national level, concluding that this condition produces a higher degree of instability.
Abstract:
Fire blight is an economically important disease of apples and pears that is caused by the bacterium Erwinia amylovora. Control of the disease depends on limiting primary blossom infection in the spring, and rapidly removing infected tissue. The possibility of using phages to control E. amylovora populations has been suggested, but previous studies have failed to show high treatment efficacies. This work describes the development of a phage-based biopesticide that controls E. amylovora populations under field conditions, and significantly reduces the incidence of fire blight.
This work reports the first use of Pantoea agglomerans, a non-pathogenic relative of E. amylovora, as a carrier for E. amylovora phages. Its role is to support a replicating population of these phages on blossom surfaces during the period when the flowers are most susceptible to infection. Seven phages and one carrier isolate were selected for field trials from existing collections of 56 E. amylovora phages and 249 epiphytic orchard bacteria. Selection of the phages and carrier was based on characteristics relevant to the production and field performance of a biopesticide: host range, genetic diversity, growth under the conditions of large-scale production, and the ability to prevent E. amylovora from infecting pear blossoms. In planta assays showed that both the phages and the carrier make significant contributions to reducing the development of fire blight symptoms in pear blossoms.
Field-scale phage production and purification methods were developed based on the growth characteristics of the phages and bacteria in liquid culture, and on the survival of phages in various liquid media.
Six of twelve phage-carrier biopesticide treatments caused statistically significant reductions in disease incidence during orchard trials. Multiplex real-time PCR was used to simultaneously monitor the phage, carrier, and pathogen populations over the course of selected treatments. In all cases, the observed population dynamics of the biocontrol agents and the pathogen were consistent with the success or failure of each treatment to control disease incidence. In treatments exhibiting a significantly reduced incidence of fire blight, the average blossom population of E. amylovora had been reduced to pre-experiment epiphytic levels. In successful treatments the phages grew on the P. agglomerans carrier for 2 to 3 d after treatment application. The phages then grew preferentially on the pathogen, once it was introduced into this blossom ecosystem. The efficacy of the successful phage-based treatments was statistically similar to that of streptomycin, which is the most effective bactericide currently available for fire blight prevention.
The in planta behaviour of E. amylovora was compared to that of Erwinia pyrifoliae, a closely related species that causes fire blight-like symptoms on pears in southeast Asia. Duplex real-time PCR was used to monitor the population dynamics of both species on single blossoms. E. amylovora exhibited a greater competitive fitness on Bartlett pear blossoms than E. pyrifoliae.
The genome of Erwinia phage
Abstract:
Age-related differences in information processing have often been explained through deficits in older adults' ability to ignore irrelevant stimuli and suppress inappropriate responses through inhibitory control processes. Functional imaging work on young adults by Nelson and colleagues (2003) has indicated that inferior frontal and anterior cingulate cortex play a key role in resolving interference effects during a delay-to-match memory task. Specifically, inferior frontal cortex appeared to be recruited under conditions of context interference, while the anterior cingulate was associated with interference resolution at the stage of response selection. Related work has shown that specific neural activities related to interference resolution are not preserved in older adults, supporting the notion of age-related declines in inhibitory control (Jonides et al., 2000; West et al., 2004b). In this study the time course and nature of these inhibition-related processes were investigated in young and old adults using high-density ERPs collected during a modified Sternberg task. Participants were presented with four target letters followed by a probe that either did or did not match one of the target letters held in working memory. Inhibitory processes were evoked by manipulating the nature of cognitive conflict in a particular trial. Conflict in working memory was elicited by presenting a probe letter that had appeared in the immediately preceding target sets. Response-based conflict was produced by presenting a negative probe that had just been viewed as a positive probe on the previous trial. Younger adults displayed a larger orienting response (P3a and P3b) to positive probes relative to a non-target baseline. Older adults produced the orienting P3a and P3b waveforms, but their responses did not differentiate between target and non-target stimuli. This age-related change in response to targetness is discussed in terms of "early selection/late correction" models of cognitive ageing. Younger adults also showed a sensitivity in their N450 response to different levels of interference. Source analysis of the N450 responses to the conflict trials of younger adults indicated an initial dipole in inferior frontal cortex and a subsequent dipole in anterior cingulate cortex, suggesting that inferior prefrontal regions may recruit the anterior cingulate to exert cognitive control functions. Individual older adults did show some evidence of an N450 response to conflict; however, this response was attenuated by a co-occurring positive deflection in the N450 time window. It is suggested that this positivity may reflect a form of compensatory activity in older adults to adapt to their decline in inhibitory control.
Abstract:
(A) Most azobenzene-based photoswitches require UV light for photoisomerization, which limits their applications in biological systems due to possible photodamage. Cyclic azobenzene derivatives, on the other hand, can undergo cis-trans isomerization when exposed to visible light. A shortened synthetic scheme was developed for the preparation of a building block containing cyclic azobenzene and D-threoninol (cAB-Thr). trans-Cyclic azobenzene was found to thermally isomerize back to the cis-form in a temperature-dependent manner. cAB-Thr was transformed into the corresponding phosphoramidite and subsequently incorporated into oligonucleotides by solid-phase synthesis. Melting temperature measurements suggested that incorporation of cis-cAB into oligonucleotides destabilizes DNA duplexes; these findings corroborate circular dichroism measurements. Finally, Fluorescence Resonance Energy Transfer experiments indicated that trans-cAB can be accommodated in DNA duplexes. (B) Inverse Electron Demand Diels-Alder (IEDDA) reactions between trans-olefins and tetrazines provide a powerful alternative to existing ligation chemistries due to their fast reaction rates, bioorthogonality and mutual orthogonality with other click reactions. In this project, the synthesis of trans-cyclooctene building blocks for oligonucleotide labeling via reaction with BODIPY-tetrazine was pursued. rel-(1R,4E,pR)-Cyclooct-4-enol and rel-(1R,8S,9S,4E)-bicyclo[6.1.0]non-4-ene-9-ylmethanol were synthesized and then transformed into the corresponding propargyl ethers. Subsequent Sonogashira reactions between these propargylated compounds and DMT-protected 5-iododeoxyuridine failed to give the desired products. Finally, a methodology was pursued for the synthesis of BODIPY-tetrazine conjugates to be used in future IEDDA reactions with trans-cyclooctene-modified oligonucleotides.
Abstract:
Feature selection plays an important role in knowledge discovery and data mining nowadays. In traditional rough set theory, feature selection using a reduct - the minimal discerning set of attributes - is an important area. Nevertheless, the original definition of a reduct is restrictive, so in previous research it was proposed to take into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generating bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms have been used to calculate reducts in some previous works, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm system estimated the quality of each bireduct by the values of two objective functions as evolution progressed, so that a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used to perform a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts needed to achieve good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. It was shown that the prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
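The two-objective encoding lends itself to a compact illustration. Below is a minimal, self-contained sketch of the idea, assuming a bit-string individual that concatenates an attribute mask with an object mask and two objectives (cover many objects, use few attributes); the toy decision table, operators, and parameters are illustrative assumptions, not the thesis's actual algorithm.

```python
# Sketch: evolving bireducts (attribute subset B, object subset X) with a
# two-objective genetic algorithm. All data and operators are illustrative.
import random

random.seed(0)

# Toy decision table: rows are objects, last column is the decision.
TABLE = [
    (1, 0, 2, 'a'), (1, 1, 2, 'b'), (0, 1, 0, 'a'),
    (0, 0, 0, 'b'), (1, 0, 0, 'a'), (0, 1, 2, 'b'),
]
N_ATTR, N_OBJ = 3, len(TABLE)

def consistent(attrs, objs):
    """B must discern every pair of kept objects with different decisions."""
    kept = [TABLE[i] for i in objs]
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            if kept[i][-1] != kept[j][-1] and all(
                    kept[i][a] == kept[j][a] for a in attrs):
                return False
    return True

def fitness(ind):
    attrs = [a for a in range(N_ATTR) if ind[a]]
    objs = [o for o in range(N_OBJ) if ind[N_ATTR + o]]
    if not attrs or not consistent(attrs, objs):
        return (0.0, 0.0)
    # Objective 1: cover many objects; objective 2: use few attributes.
    return (len(objs) / N_OBJ, 1.0 - len(attrs) / N_ATTR)

def dominates(f, g):
    return all(x >= y for x, y in zip(f, g)) and f != g

pop = [[random.randint(0, 1) for _ in range(N_ATTR + N_OBJ)] for _ in range(40)]
for gen in range(200):
    a, b = random.sample(pop, 2)                 # tournament by dominance
    parent = a if dominates(fitness(a), fitness(b)) else b
    child = parent[:]
    child[random.randrange(len(child))] ^= 1     # bit-flip mutation
    worst = min(range(len(pop)), key=lambda i: sum(fitness(pop[i])))
    if sum(fitness(child)) >= sum(fitness(pop[worst])):
        pop[worst] = child                       # steady-state replacement

front = {tuple(p): fitness(p) for p in pop if fitness(p) != (0.0, 0.0)}
print("non-degenerate bireducts found:", front)
```

A production version would use Pareto-front selection (e.g. NSGA-II) and crossover rather than the weighted-sum replacement used here for brevity.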
Abstract:
This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
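To make the mechanics concrete, here is a hedged sketch of the unbiasedness adjustment in simplified notation of ours; the note's actual procedures are more general.

```latex
% Let RV_t be realized volatility computed from M intraday returns and
% IV_t the latent integrated volatility it estimates, RV_t = IV_t + u_t
% with E[u_t] = 0. Barndorff-Nielsen and Shephard (2002a) give
% Var(u_t) ~ (2/M) IQ_t, where IQ_t is the integrated quarticity.
% For a volatility forecast f_t, the infeasible and feasible MSE losses obey
\[
  \mathbb{E}\big[(IV_t - f_t)^2\big]
  = \mathbb{E}\big[(RV_t - f_t)^2\big] - \mathbb{E}\big[u_t^2\big]
  \approx \mathbb{E}\big[(RV_t - f_t)^2\big]
        - \frac{2}{M}\,\mathbb{E}\big[IQ_t\big],
\]
% so subtracting an estimate of the measurement-error variance from the
% feasible loss yields an (approximately) unbiased loss for the true
% target, raising the measured degree of return-volatility predictability.
```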
Abstract:
It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and in general yields no inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. Until now, however, these projections could be implemented only through costly numerical techniques. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution exploits the geometric properties of “quadrics”, which can be viewed as extensions of the usual confidence intervals and ellipsoids, and only least squares techniques are required to build the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
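A hedged sketch of the construction, in standard weak-instrument notation rather than the paper's exact formulation:

```latex
% The Anderson-Rubin confidence set for the full coefficient vector \beta
% collects the values not rejected by the AR test at level \alpha:
\[
  C_\beta(\alpha) = \{\beta : AR(\beta) \le F_{\alpha}\},
\]
% which, because AR(\beta) is a ratio of quadratic forms in \beta, is a
% "quadric" \{\beta : \beta' A \beta + b'\beta + c \le 0\}. A valid
% confidence set for any transformation g(\beta) follows by projection:
\[
  g\big(C_\beta(\alpha)\big) = \{ g(\beta) : \beta \in C_\beta(\alpha) \},
  \qquad
  \Pr\big[g(\beta_0) \in g(C_\beta(\alpha))\big] \ge 1 - \alpha .
\]
% For a single coefficient g(\beta) = \beta_k, the projection of a quadric
% is an interval (possibly unbounded or the whole real line), which the
% paper shows can be computed analytically using only least squares.
```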
Abstract:
The attached file was created with Scientific WorkPlace (LaTeX).
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
An aggregation rule maps each profile of individual strict preference orderings over a set of alternatives into a social ordering over that set. We call such a rule strategy-proof if misreporting one’s preference never produces a social ordering that is strictly between the original ordering and one’s own preference. After describing a few examples of manipulable rules, we study in some detail three classes of strategy-proof rules: (i) rules based on a monotonic alteration of the majority relation generated by the preference profile; (ii) rules improving upon a fixed status quo; and (iii) rules generalizing the Condorcet-Kemeny aggregation method.
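One natural formalization of the betweenness condition, stated here as an assumption about the intended definition (orderings viewed as sets of ordered pairs, as in Kemeny-style betweenness):

```latex
% An ordering R lies strictly between orderings P and Q when it agrees with
% every pairwise comparison on which P and Q agree, yet differs from both:
\[
  P \cap Q \subseteq R, \qquad R \neq P, \qquad R \neq Q .
\]
% A rule f is then strategy-proof in this sense if no agent i with true
% preference P_i has a report P_i' whose outcome lands strictly between
% the truthful social ordering and i's own preference:
\[
  \nexists\, P_i' :\;
  f(P_i', P_{-i}) \text{ is strictly between } f(P_i, P_{-i})
  \text{ and } P_i .
\]
```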
Abstract:
Stimuli-responsive polymers have been widely studied in recent years, notably with biomedical applications in view. They have the ability to change their solubility properties in response to variations in pH or temperature. This thesis concerns the synthesis and study of new diblocks composed of two random copolymers. The polymers were obtained by RAFT (reversible addition-fragmentation chain-transfer) controlled radical polymerization. The block copolymers are formed from methacrylate and/or acrylamide monomers whose polymers are known to be thermosensitive and pH-sensitive. First, random block copolymers of the type AnBm-b-ApBq were synthesized from N-n-propylacrylamide (nPA) and N-ethylacrylamide (EA), A and B respectively, by RAFT polymerization. The copolymerization kinetics of poly(nPAx-co-EA1-x)-block-poly(nPAy-co-EA1-y) and their composition were studied in order to characterize and evaluate the physico-chemical properties of random block copolymers with a low polydispersity index. Their thermosensitive character was studied in aqueous solution by UV-Vis spectroscopy, turbidimetry and dynamic light scattering (DLS) analysis. The observed cloud points (CP) of the individual blocks and of the resulting copolymers show well-defined phase transitions on heating. A large number of natural macromolecules respond to external stimuli such as pH and temperature. Accordingly, a third monomer, 2-diethylaminoethyl methacrylate (DEAEMA), was added to the synthesis to form block copolymers of the form AnBm-b-ApCq, offering a dual response (pH and temperature) that is tunable in solution. This multi-stimuli type of polymer, of the form poly(nPAx-co-DEAEMA1-x)-block-poly(nPAy-co-EA1-y), was also synthesized by RAFT polymerization. The results indicate random block copolymers with physico-chemical properties different from those of the first diblocks, notably their solubility under variations of pH and temperature. Finally, the change in hydrophobicity of the copolymers was studied by varying the lengths of the block sequences. It is known that the relative block length affects the aggregation mechanisms of an amphiphilic copolymer. Thus, under different pH and/or temperature stimuli, experiments performed on random block copolymers of different lengths show interesting aggregation behaviours, evolving through various micellar, aggregate and vesicle forms.
Abstract:
Proteins are at the heart of life. They are incredible molecular nanomachines, specialized and refined by millions of years of evolution for well-defined functions in the cell. The structure of proteins, that is, the three-dimensional arrangement of their atoms, is intimately linked to their functions. The apparent absence of structure in some proteins is also increasingly recognized as just as crucial. Amyloid proteins are a striking example: they adopt an ensemble of varied structures, difficult to observe experimentally, that are associated with neurodegenerative diseases. This thesis first addresses the structural study of the amyloid proteins amyloid-beta (Alzheimer's) and huntingtin (Huntington's) during their folding and self-assembly processes. The results obtained describe, with atomic resolution, the interactions of the structural ensembles of these two proteins. Concerning the amyloid-beta protein (AB), our results identify significant structural differences between three of its physiological forms during its first steps of self-assembly in an aqueous environment. We then compared these results with those obtained in recent years by other research groups using a variety of experimental and simulation protocols. Clear trends emerge from our comparison regarding the influence of the physiological form of AB on its structural ensemble during its first self-assembly steps. The identification of differing structural properties rationalizes the origin of their distinct aggregation propensities. Moreover, the identification of shared structural properties offers potential targets for therapeutic agents that would prevent the formation of the oligomers responsible for neurotoxicity. Concerning the huntingtin protein, we elucidated the structural ensemble of its functional region, located at its N-terminus, in aqueous and membrane environments. In agreement with the available experimental data, our results on its folding in an aqueous environment reveal the dominant interactions, as well as the influence on these of the regions adjacent to the functional region. We also characterized the stability and growth of nanotubular structures that are potential candidates for the self-assembly pathways of the amyloid region of huntingtin. In addition, together with a group of experimentalists, we developed a detailed model illustrating the main interactions responsible for the membrane-anchor role of the N-terminal region, which serves to control the localization of huntingtin in the cell. Second, this thesis addresses the refinement of a coarse-grained model (sOPEP) and the development of a new all-atom model (aaOPEP), both based on the OPEP coarse-grained force field, which is commonly used to study protein folding and the aggregation of amyloid proteins. These models were optimized with the aim of improving de novo predictions of peptide structure by the PEP-FOLD method. The OPEP, sOPEP and aaOPEP models were also incorporated into a new, highly flexible molecular dynamics code in order to greatly simplify their future development.
Abstract:
Transparent conducting oxides (TCOs) have been known and used for technologically important applications for more than 50 years. Oxide materials such as In2O3, SnO2 and the impurity-doped SnO2:Sb, SnO2:F and In2O3:Sn (indium tin oxide) were primarily used as TCOs. Indium-based oxides have been widely used as TCOs for the past few decades, but the recent increase in the cost of indium and the scarcity of this material have made low-cost TCOs difficult to obtain. Hence the search for alternative TCO materials has been a topic of active research, resulting in the development of various binary and ternary compounds. The advantages of binary oxides are the ease of controlling the composition and the deposition parameters. ZnO has been identified as one of the promising candidates for transparent electronic applications owing to its exciting optoelectronic properties. Some optoelectronic applications of ZnO overlap with those of GaN, another wide band gap semiconductor widely used for the production of green, blue-violet and white light emitting devices. However, ZnO has some advantages over GaN, among which are the availability of fairly high quality ZnO bulk single crystals and a large excitonic binding energy. ZnO also has a much simpler crystal-growth technology, resulting in a potentially lower cost for ZnO-based devices. Most TCOs are n-type semiconductors and are utilized as transparent electrodes in a variety of commercial applications such as photovoltaics, electrochromic windows and flat panel displays. TCOs offer great potential for realizing a diverse range of active functions, and novel functions can be integrated into the materials according to requirements. However, the application of TCOs has been restricted to transparent electrodes, notwithstanding the fact that TCOs are semiconductors. The basic reason is the lack of a p-type TCO, since many of the active functions in semiconductors originate from the nature of the pn-junction. In 1997, H. Kawazoe et al. reported CuAlO2 as the first p-type TCO, along with a chemical design concept for the exploration of other p-type TCOs. This has led to the fabrication of all-transparent diodes and transistors. Fabrication of TCO nanostructures has been a focus of an ever-increasing number of researchers worldwide, mainly due to their unique optical and electronic properties, which make them ideal for a wide spectrum of applications ranging from flexible displays and quantum well lasers to in vivo biological imaging and therapeutic agents. ZnO is a highly multifunctional material system with highly promising application potential for UV light emitting diodes, diode lasers, sensors, etc. ZnO nanocrystals and nanorods doped with transition metal impurities have also attracted great interest recently for their spin-electronic applications. This thesis summarizes results on the growth and characterization of ZnO-based diodes and nanostructures by pulsed laser ablation. Various ZnO-based heterojunction diodes have been fabricated using pulsed laser deposition (PLD) and their electrical characteristics were interpreted using existing models. Pulsed laser ablation has been employed to fabricate ZnO quantum dots, ZnO nanorods and ZnMgO/ZnO multiple quantum well structures with the aim of studying their luminescent properties.
Abstract:
One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming etc. are efficient in handling complex cost functions, but are limited in handling the stochastic data present in a practical system. Also, the learning steps must be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation; it can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning approaches, but applications of Reinforcement Learning in the field of power systems have been few. The objective here is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through Reinforcement Learning for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions on one of the existing power systems.
The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. Unit Commitment is formulated as a multi-stage decision process, and a Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up-time/down-time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like Genetic Algorithms.
The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed using a Learning Automata algorithm. Then, to solve the scheduling task for systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task. The solution obtained is extended to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle continuous state spaces, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases, and the proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing etc.
As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to the Automatic Generation Control loop; the RL solution is extended here to adopt a common frequency for all the interconnected areas, more similar to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller.
In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is thus applied to solve scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution also provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control centre. Power scheduling of systems with different sources such as hydro, thermal etc. can also be explored, and Reinforcement Learning solutions can be developed.
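To make the Q-learning formulation concrete, here is a toy sketch of unit commitment cast as a multi-stage decision process and solved with tabular Q-learning; the two-unit system, demand profile, and hyperparameters are illustrative assumptions, not data from the thesis or from NTPS II.

```python
# Toy unit commitment as a multi-stage decision process with tabular
# Q-learning. States are (hour, previous commitment); actions are on/off
# combinations; stage costs combine startup, running, and shortfall penalty.
import random

random.seed(1)

CAP   = [100, 60]            # unit capacities (MW), illustrative
COST  = [20, 35]             # simplified running costs per hour ($)
START = [50, 10]             # startup costs ($)
DEMAND = [80, 130, 150, 90]  # load to be met at each stage (hour)
ACTIONS = [(0, 1), (1, 0), (1, 1)]

def step_cost(prev, act, demand):
    """Startup + running cost, with a large penalty for unmet demand."""
    cap = sum(c for c, on in zip(CAP, act) if on)
    cost = sum(s for s, p, a in zip(START, prev, act) if a and not p)
    cost += sum(c for c, on in zip(COST, act) if on)
    return cost + (10_000 if cap < demand else 0)

Q = {}  # Q[((hour, prev_commitment), action_index)] = expected cost-to-go
def q(s, a):
    return Q.get((s, a), 0.0)

ALPHA, GAMMA, EPS = 0.1, 1.0, 0.2
for episode in range(20_000):
    prev = (0, 0)
    for hour in range(len(DEMAND)):
        s = (hour, prev)
        a = (random.randrange(len(ACTIONS)) if random.random() < EPS
             else min(range(len(ACTIONS)), key=lambda i: q(s, i)))
        act = ACTIONS[a]
        c = step_cost(prev, act, DEMAND[hour])
        s2 = (hour + 1, act)
        best_next = (0.0 if hour + 1 == len(DEMAND)
                     else min(q(s2, i) for i in range(len(ACTIONS))))
        # Q-learning update on costs, so the greedy policy minimises.
        Q[(s, a)] = q(s, a) + ALPHA * (c + GAMMA * best_next - q(s, a))
        prev = act

prev, schedule = (0, 0), []
for hour in range(len(DEMAND)):
    s = (hour, prev)
    a = min(range(len(ACTIONS)), key=lambda i: q(s, i))
    schedule.append(ACTIONS[a])
    prev = ACTIONS[a]
print("learned commitment schedule:", schedule)
```

State aggregation, as used in the thesis, would replace the exact commitment tuple in the state with a coarser summary so that the table stays tractable as the number of units grows.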
Abstract:
Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. A major part of the theory and applications in connection with reliability analysis has been discussed based on measures defined in terms of the distribution function. In the beginning chapters of the thesis, we have described some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up of the systematic study in this connection by Nair and Sankaran (2009), in the present work we have tried to extend their ideas to develop the necessary theoretical framework for lifetime data analysis. In Chapter 1, we have given the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 of the thesis is devoted to the presentation of various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we have pointed out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we have studied the first two L-moments of residual life and their relevance in various applications of reliability analysis. We have shown that the first L-moment of the residual life function is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we have defined the percentile residual life in reversed time (RPRL) and derived its relationship with the reversed hazard rate (RHR). We have discussed the characterization problem for the RPRL and demonstrated with an example that the RPRL for a given percentile does not determine the distribution uniquely.
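For readers unfamiliar with the quantile-based framework, the central objects can be sketched as follows (standard definitions in our notation; the thesis's own development is more general).

```latex
% The quantile function of a lifetime X with distribution function F is
\[
  Q(u) = F^{-1}(u) = \inf\{x : F(x) \ge u\}, \qquad 0 \le u \le 1,
\]
% and quantile-based reliability concepts replace conditioning on age x
% with conditioning on the fraction u of the population already failed.
% The vitality function is the expected lifetime given survival to age x,
\[
  V(x) = \mathbb{E}[X \mid X > x]
       = \frac{1}{1 - F(x)} \int_x^\infty t \, dF(t),
\]
% so the identification of the first L-moment (the mean) of the residual
% life X - x given X > x with vitality amounts to
\[
  L_1(x) = \mathbb{E}[X - x \mid X > x] = V(x) - x .
\]
```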