944 results for Almost Optimal Density Function
Abstract:
This dissertation has two almost unrelated themes: privileged words and Sturmian words. Privileged words are a new class of words introduced recently. A word is privileged if it is a complete first return to a shorter privileged word, the shortest privileged words being letters and the empty word. Here we give and prove almost all results on privileged words known to date. On the other hand, the study of Sturmian words is a well-established topic in combinatorics on words. In this dissertation, we focus on questions concerning repetitions in Sturmian words, reproving old results and giving new ones, and on establishing completely new research directions. The study of privileged words presented in this dissertation aims to derive their basic properties and to answer basic questions regarding them. We explore a connection between privileged words and palindromes and seek out answers to questions on context-freeness, computability, and enumeration. It turns out that the language of privileged words is not context-free, but privileged words are recognizable by a linear-time algorithm. A lower bound on the number of binary privileged words of given length is proven. The main interest, however, lies in the privileged complexity functions of the Thue-Morse word and Sturmian words. We derive recurrences for computing the privileged complexity function of the Thue-Morse word, and we prove that Sturmian words are characterized by their privileged complexity function. As a slightly separate topic, we give an overview of a certain method of automated theorem-proving and show how it can be applied to study privileged factors of automatic words. The second part of this dissertation is devoted to Sturmian words. We extensively exploit the interpretation of Sturmian words as irrational rotation words. The essential tools are continued fractions and elementary, but powerful, results of Diophantine approximation theory. With these tools at our disposal, we reprove old results on powers occurring in Sturmian words with emphasis on the fractional index of a Sturmian word. Further, we consider abelian powers and abelian repetitions and characterize the maximum exponents of abelian powers with given period occurring in a Sturmian word in terms of the continued fraction expansion of its slope. We define the notion of abelian critical exponent for Sturmian words and explore its connection to the Lagrange spectrum of irrational numbers. The results obtained are often specialized for the Fibonacci word; for instance, we show that the minimum abelian period of a factor of the Fibonacci word is a Fibonacci number. In addition, we propose a completely new research topic: the square root map. We prove that the square root map preserves the language of any Sturmian word. Moreover, we construct a family of non-Sturmian optimal squareful words whose language the square root map also preserves. This construction yields examples of aperiodic infinite words whose square roots are periodic.
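The recursive definition translates directly into a brute-force test. The sketch below assumes exactly the definition given above (letters and the empty word are privileged; a longer word is privileged when it is a complete first return to a shorter privileged word, i.e. that word occurs in it exactly twice, as a prefix and as a suffix); it runs in roughly quadratic time, not the linear time of the recognition algorithm mentioned in the abstract:

```python
def count_occurrences(w: str, u: str) -> int:
    """Count possibly overlapping occurrences of u in w."""
    return sum(1 for i in range(len(w) - len(u) + 1) if w[i:i + len(u)] == u)

def is_privileged(w: str) -> bool:
    """Brute-force privileged-word test from the definition."""
    if len(w) <= 1:
        return True                       # letters and the empty word
    for k in range(len(w) - 1, 0, -1):    # candidate privileged borders w[:k]
        u = w[:k]
        if w.endswith(u) and count_occurrences(w, u) == 2 and is_privileged(u):
            return True                   # w is a complete first return to u
    return False

# e.g. is_privileged("abaaba") -> True (complete first return to "aba");
# is_privileged("ab") -> False
```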
Abstract:
Recent realistic high-resolution modeling studies show a net increase of submesoscale activity in fall and winter, when the mixed layer depth is at its maximum. This increase in submesoscale activity is associated with a reduced deepening of the mixed layer. Both phenomena can be related to the development of mixed layer instabilities, which convert available potential energy into submesoscale eddy kinetic energy and contribute to a fast restratification by slumping the horizontal density gradient in the mixed layer. In the present work, mixed layer formation and restratification were studied by uniformly cooling a fully turbulent zonal jet in a periodic channel at different resolutions, from eddy-resolving (10 km) to submesoscale-permitting (2 km). The effect of the submesoscale activity, highlighted by these different horizontal resolutions, was quantified in terms of mixed layer depth, restratification rate, and buoyancy fluxes. Contrary to many idealized studies focusing on the restratification phase only, this study addresses a continuous event of mixed layer formation followed by its complete restratification. The robustness of the present results was established by ensemble simulations. The results show that, at higher resolution, when submesoscale motions start to be resolved, the mixed layer formed during the surface cooling is significantly shallower and the total restratification is almost three times faster. Such differences between coarse- and fine-resolution models are consistent with the submesoscale upward buoyancy flux, which balances the convection during the formation phase and accelerates the restratification once the surface cooling is stopped. This submesoscale buoyancy flux is active even below the mixed layer. Our simulations show that mesoscale dynamics also cause restratification, but on longer time scales. Finally, the spatial distribution of the mixed layer depth is highly heterogeneous in the presence of submesoscale activity, prompting the question of whether it is possible to parameterize submesoscale processes and their effects on marine biology as a function of a spatially averaged mixed layer depth.
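As a concrete illustration of one of the diagnostics above, here is a minimal sketch of a mixed-layer-depth computation from a single model density profile, using a common density-threshold criterion; the abstract does not say which MLD diagnostic the study uses, so the 0.03 kg m^-3 threshold is an assumption:

```python
import numpy as np

def mixed_layer_depth(z, rho, d_rho=0.03):
    """Mixed layer depth: the depth where potential density first exceeds
    its surface value by d_rho (kg/m^3), a standard threshold criterion.
    z: depths in m, positive downward and increasing; rho: same length."""
    z, rho = np.asarray(z, float), np.asarray(rho, float)
    exceeds = rho >= rho[0] + d_rho
    if not exceeds.any():
        return float(z[-1])          # density never passes the threshold
    i = int(np.argmax(exceeds))      # first level past the threshold
    if i == 0:
        return float(z[0])
    # linear interpolation between the two bracketing levels
    f = (rho[0] + d_rho - rho[i - 1]) / (rho[i] - rho[i - 1])
    return float(z[i - 1] + f * (z[i] - z[i - 1]))

# e.g. mixed_layer_depth([0, 10, 50, 100],
#                        [1025.0, 1025.01, 1025.05, 1025.2]) -> 30.0 m
```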
Abstract:
Chapter 1: Under the average common value function, we select, almost uniquely, the mechanism that gives the seller the largest portion of the true value in the worst situation among all direct mechanisms that are feasible, ex-post implementable, and individually rational. Chapter 2: Strategy-proof, budget-balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus, over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, one object with large probability to the agent with the pth highest valuation, and with the remaining probability to the agent with the (p+1)th highest valuation. When “envy-freeness” is replaced by the weaker condition of “voluntary participation”, the optimal mechanism differs only when p is much less than n. Chapter 3: One group is to be selected among a set of agents. Agents have preferences over the size of the group if they are selected, and preferences over size as well as the “stand-outside” option are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof, and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents to either maintain or shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.
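A sketch of the Chapter 2 allocation rule in Python is below. The abstract only says the pth-highest agent receives the object with "a large probability" without pinning down its value, so the parameter q here is a placeholder assumption:

```python
def allocate(valuations, p, q=0.9):
    """Probability that each agent receives an object under the rule:
    the p-1 highest-valuation agents get an object for sure, the p-th
    highest gets one with probability q ('large'; its exact value is not
    specified in the abstract), and the (p+1)-th highest gets the
    remaining probability 1-q. Expected number of objects sums to p."""
    n = len(valuations)
    assert 1 <= p and p + 1 <= n, "need at least p+1 agents"
    order = sorted(range(n), key=lambda i: valuations[i], reverse=True)
    prob = [0.0] * n
    for i in order[:p - 1]:
        prob[i] = 1.0
    prob[order[p - 1]] = q
    prob[order[p]] = 1.0 - q
    return prob

# e.g. allocate([9, 7, 5, 3], p=2) -> [1.0, 0.9, 0.1, 0.0]
```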
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and is compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented, with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for improvement of this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard both to the underlying philosophies and to the application of the methods of statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reducing the dimension of the design space, and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
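For context on the numerical methods reviewed in Chapter 3, here is a minimal sketch of a classical Wynn-Fedorov vertex-direction iteration for the continuous D-optimal design problem on a finite candidate set. It uses the equivalence-theorem bound max_x d(x, w) <= k as its stopping rule; it is a standard reference method, not the new algorithm presented in the thesis:

```python
import numpy as np

def d_optimal_weights(X, n_iter=2000, tol=1e-6):
    """Wynn-Fedorov sketch for D-optimality. X is an (m, k) matrix whose
    rows are the regression vectors f(x)^T of m candidate design points.
    Returns a design measure: a weight vector w over the candidates."""
    m, k = X.shape
    w = np.full(m, 1.0 / m)                   # start from the uniform design
    for t in range(n_iter):
        M = X.T @ (w[:, None] * X)            # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)  # variance function d(x, w)
        j = int(np.argmax(d))
        if d[j] <= k + tol:                   # equivalence theorem: w is optimal
            break
        alpha = 1.0 / (t + 2)                 # classical step length
        w *= (1.0 - alpha)                    # move mass toward the worst point
        w[j] += alpha
    return w

# e.g. quadratic regression f(x) = (1, x, x^2) on a grid over [-1, 1]:
# x = np.linspace(-1, 1, 21); X = np.vander(x, 3, increasing=True)
# d_optimal_weights(X) puts weight ~1/3 on each of x = -1, 0, 1
```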
Abstract:
Context. In February-March 2014, the MAGIC telescopes observed the high-frequency-peaked BL Lac object 1ES 1011+496 (z = 0.212) in a flaring state at very high energies (VHE, E > 100 GeV). The flux reached a level more than 10 times higher than any previously recorded flaring state of the source. Aims. We describe the characteristics of the flare, presenting the light curve and the spectral parameters of the night-wise spectra and of the average spectrum of the whole period. From these data we aim to detect the imprint of the Extragalactic Background Light (EBL) in the VHE spectrum of the source, in order to constrain its intensity in the optical band. Methods. We analyzed the gamma-ray data from the MAGIC telescopes using the standard MAGIC software to produce the light curve and the spectra. To constrain the EBL, we implement the method developed by the H.E.S.S. collaboration, in which the intrinsic energy spectrum of the source is modeled with a simple function (< 4 parameters) and the EBL-induced optical depth is calculated using a template EBL model. The likelihood of the observed spectrum is then maximized, including a normalization factor for the EBL opacity among the free parameters. Results. The collected data allowed us to describe the flux changes night by night and also to produce differential energy spectra for all nights of the observed period. The estimated intrinsic spectra of all the nights could be fitted by power-law functions. Evaluating the changes in the fit parameters, we conclude that the spectral shape for most of the nights was compatible, regardless of the flux level, which enabled us to produce an average spectrum from which the EBL imprint could be constrained. The likelihood ratio test shows that the model with an EBL density of 1.07 (−0.20, +0.24)_(stat+sys), relative to the one in the tested EBL template (Domínguez et al. 2011), is preferred at the 4.6σ level over the no-EBL hypothesis, under the assumption that the intrinsic source spectrum can be modeled as a log-parabola. This translates into a constraint on the EBL density in the wavelength range [0.24 μm, 4.25 μm], with a peak value at 1.4 μm of λF_λ = 12.27^(+2.75)_(−2.29) nW m^(−2) sr^(−1), including systematics.
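A schematic of the spectral model described in the Methods (an intrinsic log-parabola attenuated by a scaled EBL opacity, with the scale factor alpha being the EBL density relative to the template) might look as follows; the pivot energy, parameter names, and the use of scipy's curve_fit are illustrative assumptions, not the MAGIC analysis chain:

```python
import numpy as np
from scipy.optimize import curve_fit

E0 = 0.3  # assumed pivot energy in TeV

def observed_spectrum(E, N0, a, b, alpha, tau):
    """Intrinsic log-parabola dN/dE = N0 (E/E0)^(-a - b*log10(E/E0)),
    attenuated by exp(-alpha * tau(E)), where tau is the optical depth
    from an EBL template (e.g. Dominguez et al. 2011) tabulated at the
    same energies E, and alpha scales the EBL density relative to it."""
    intrinsic = N0 * (E / E0) ** (-a - b * np.log10(E / E0))
    return intrinsic * np.exp(-alpha * tau)

# hypothetical usage, with E, flux, flux_err, tau_template assumed to be
# measured/tabulated arrays; alpha is then the fitted EBL normalization:
# popt, pcov = curve_fit(
#     lambda E, N0, a, b, alpha: observed_spectrum(E, N0, a, b, alpha, tau_template),
#     E, flux, sigma=flux_err, p0=[1e-10, 2.0, 0.1, 1.0])
```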
Abstract:
This thesis consists of three papers on optimal fiscal and monetary policy. In the first paper, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income taxes. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, despite the fact that price changes are costly and despite the presence of time-varying labor taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not on taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a quite clear trade-off between the optimal inflation rate and its volatility on the one hand, and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and income tax volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility rise remarkably, to more than 58% and 10% respectively, and the optimal income tax rate and its volatility decline dramatically. These results are of great importance given that in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central goal of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with roughly 97 percent less volatility, consistent with the literature. In the second paper, I show how the quantitative results imply that workers' bargaining power and the welfare costs of monetary rules are negatively related. That is, the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in sharp contrast to the literature, rules that respond to output and to labor market tightness entail considerably lower welfare costs than the inflation-targeting rule. This is especially the case for the rule that responds to labor market tightness. Welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that, by raising workers' bargaining power to the Hosios level or above, the welfare costs of the three monetary rules fall significantly, and responding to output or to labor market tightness no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third paper, I first show that the Friedman rule in a monetary model with a cash-in-advance constraint on firms is not optimal when the government has access to distortionary consumption taxes to finance its spending. I then argue that the Friedman rule in the presence of these distortionary taxes is optimal if we assume a model with raw and effective labor in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, unlike the cash-credit goods model in which the prices of the two goods are the same, the Friedman rule is optimal even when the wage rates differ. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
Abstract:
The accumulation of mannosyl-glycerate (MG), the salinity stress response osmolyte of Thermococcales, was investigated as a function of hydrostatic pressure in Thermococcus barophilus strain MP, a hyperthermophilic, piezophilic archaeon isolated from the Snake Pit site (MAR), which grows optimally at 40 MPa. Strain MP accumulated MG primarily in response to salinity stress, but, in contrast to other Thermococcales, MG was also accumulated in response to thermal stress. MG accumulation peaked for combined stresses. The accumulation of MG was drastically increased under sub-optimal hydrostatic pressure conditions, demonstrating that low pressure is perceived as a stress in this piezophile and that the proteome of T. barophilus is low-pressure sensitive. MG accumulation was strongly reduced under supra-optimal pressure conditions, clearly demonstrating the structural adaptation of this proteome to high hydrostatic pressure. The lack of MG synthesis only slightly altered the growth characteristics of two different MG synthesis deletion mutants. No shift to other osmolytes was observed. Altogether, our observations suggest that the salinity stress response in T. barophilus is not essential and may be under negative selective pressure, similarly to what has been observed for its thermal stress response.
Abstract:
Bioactive extracts were obtained from powdered carob pulp through an ultrasound extraction process and then evaluated in terms of antioxidant activity. Ten minutes of ultrasonication at 375 Hz were the optimal conditions, leading to the extract with the highest antioxidant effects. After its chemical characterization, which revealed the preponderance of gallotannins, the extract (free and microencapsulated) was incorporated into yogurts. The microspheres were prepared using an extract/sodium alginate ratio of 100/400 (mg mg(-1)), selected after testing different ratios. The yogurts with the free extract exhibited higher antioxidant activity than the samples with the encapsulated extract, showing the preserving role of alginate as a coating material. Neither form significantly altered the yogurt's nutritional value. This study confirmed the efficiency of microencapsulation in stabilizing functional ingredients in food matrices, largely maintaining the structural integrity of the polyphenols extracted from carob pulp and, furthermore, improving the antioxidant potency of the final product.
Abstract:
As the semiconductor industry struggles to maintain its momentum along the path of Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. We also investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. In contrast to the common assumption in 2D ICs that shutdown gates are cheap and thus can be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and the reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of CPUs, providing high bandwidth and low latency. However, non-uniform voltage fluctuations and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
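To make the optimization target concrete: clock nets switch every cycle, so to first order the clock tree's dynamic power is proportional to the total switched capacitance. A minimal sketch of such a cost model is below; the actual objective function of the proposed flow is not given in the abstract, so this first-order CV²f model is an assumption:

```python
def clock_tree_power(wire_caps, buffer_caps, tsv_caps, vdd, f_clk):
    """First-order dynamic power of a 3D clock tree, P = f * Vdd^2 * C_total,
    with unit activity factor since clock nets toggle every cycle.
    Capacitances in farads, vdd in volts, f_clk in hertz; returns watts."""
    c_total = sum(wire_caps) + sum(buffer_caps) + sum(tsv_caps)
    return f_clk * vdd ** 2 * c_total

# hypothetical numbers: 1 GHz clock, 0.9 V supply
# p = clock_tree_power([2e-12, 3e-12], [1e-12], [0.5e-12], vdd=0.9, f_clk=1e9)
```

Under a model like this, trading wire length against the number of clock and control TSVs changes the capacitance mix, which is why a wire-length-minimal tree need not be power-minimal.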
Abstract:
Ethylene is an essential plant hormone involved in nearly all stages of plant growth and development. EIN2 (ETHYLENE INSENSITIVE2) is a master positive regulator in the ethylene signaling pathway, consisting of an N-terminal domain and a C-terminal domain. The EIN2 N-terminal domain localizes to the endoplasmic reticulum (ER) membrane and shows sequence similarity to Nramp metal ion transporters. The cytosolic C-terminal domain is unique to plants and signals downstream. There have been several major gaps in our knowledge of EIN2 function. It was unknown how the ethylene signal is relayed from the known upstream component CTR1 (CONSTITUTIVE TRIPLE RESPONSE1), a Ser/Thr kinase at the ER, to EIN2. How the ethylene signal is transduced from EIN2 to the next downstream component, the transcription factor EIN3 (ETHYLENE INSENSITIVE3) in the nucleus, was also unknown. The N-terminal domain of EIN2 shows homology to Nramp metal ion transporters, and whether EIN2 can also function as a metal transporter has been a question plaguing the ethylene field for almost two decades. Here, EIN2 was found to interact with the CTR1 protein kinase, leading to the discovery that CTR1 phosphorylates the C-terminal domain of EIN2 in Arabidopsis thaliana. Using tags at the termini of EIN2, it was deduced that in the presence of ethylene the EIN2 C-terminal domain is cleaved and translocates into the nucleus, where it could somehow activate downstream ethylene responses. The EIN2 C-terminal domain interacts with the nuclear proteins RTE3 and EER5, which are components of the TREX-2 mRNA export complex, although the role of these interactions remains unclear. The EIN2 N-terminal domain was found to be capable of divalent metal transport when expressed in E. coli and S. cerevisiae, leading to the hypothesis that metal transport plays a role in ethylene signaling. This hypothesis was tested using a novel missense allele, ein2 G36E, substituting a highly conserved residue that is required for metal transport in Nramp proteins. The G36E substitution did not disrupt metal ion transport of EIN2, but the ethylene-insensitive phenotype of this mutant indicates that the EIN2 N-terminal domain is important for positively regulating the C-terminal domain. The defect of the ein2 G36E mutant does not prevent proper expression or subcellular localization, but might affect protein modifications. The ein2 G36E allele is partially dominant, most likely displaying haploinsufficiency. Overexpression of the EIN2 N-terminal domain in the ein2 G36E mutant did not rescue ethylene insensitivity, suggesting that the N-terminal domain functions in cis to regulate the C-terminal domain. These findings advance our knowledge of EIN2, which is critical to understanding ethylene signaling.
Abstract:
Memory storage in the brain involves adjustment of the strength of existing synapses and formation of new neural networks. A key process underlying memory formation is synaptic plasticity, the ability of excitatory synapses to strengthen or weaken their connections in response to patterns of activity between their connected neurons. Synaptic plasticity is governed by the precise pattern of Ca²⁺ influx through postsynaptic N-methyl-D-aspartate-type glutamate receptors (NMDARs), which can lead to the activation of the small GTPases Ras and Rap. Differential activation of Ras and Rap acts to modulate synaptic strength by promoting the insertion or removal of 2-amino-3-(3-hydroxy-5-methyl-isoxazol-4-yl)propanoic acid receptors (AMPARs) at the synapse. Synaptic GTPase-activating protein (synGAP) regulates AMPAR levels by catalyzing the inactivation of GTP-bound (active) Ras or Rap. synGAP is positioned in close proximity to the cytoplasmic tail regions of the NMDAR through its association with the PDZ domains of PSD-95. SynGAP's activity is regulated by the prominent postsynaptic protein kinase Ca²⁺/calmodulin-dependent protein kinase II (CaMKII) and by cyclin-dependent kinase 5 (CDK5), a known binding partner of CaMKII. Modulation of synGAP's activity by phosphorylation may alter the ratio of active Ras to Rap in spines, thus pushing the spine towards the insertion or removal of AMPARs and subsequently strengthening or weakening the synapse. To date, all biochemical studies of the regulation of synGAP activity by protein kinases have utilized impure preparations of membrane-bound synGAP. Here we have clarified the effects of phosphorylation of synGAP on its Ras and Rap GAP activities by preparing and utilizing purified, soluble recombinant synGAP, Ras, Rap, CaMKII, CDK5, PLK2, and CaM. Using mass spectrometry, we have confirmed the presence of previously identified CaMKII and CDK5 sites in synGAP and have identified novel sites of phosphorylation by CaMKII, CDK5, and PLK2. We have shown that the net effect of phosphorylation of synGAP by CaMKII, CDK5, and PLK2 is an increase in its GAP activity toward HRas and Rap1. In contrast, there is no effect on its GAP activity toward Rap2. Additionally, by assaying the GAP activity of phosphomimetic synGAP mutants, we have been able to hypothesize the effects of CDK5 phosphorylation at specific sites in synGAP. In the course of this work, we also found, unexpectedly, that synGAP is itself a Ca²⁺/CaM-binding protein. While Ca²⁺/CaM binding does not directly affect synGAP activity, it causes a conformational change in synGAP that increases the rate of its phosphorylation and exposes additional phosphorylation sites that are inaccessible in the absence of Ca²⁺/CaM.
The postsynaptic density (PSD) is an electron-dense region in excitatory postsynaptic neurons that contains a high concentration of glutamate receptors, cytoskeletal proteins, and associated signaling enzymes. Within the PSD, three major classes of scaffolding molecules function to organize signaling enzymes and glutamate receptors. PDZ domains present in the Shank and PSD-95 scaffold families serve to physically link AMPARs and NMDARs to signaling molecules in the PSD. Because of the specificity and high affinity of PDZ domains for their ligands, I reasoned that these interacting pairs could provide the core components of an affinity chromatography system, including affinity resins, affinity tags, and elution agents. I show that affinity columns containing the PDZ domains of PSD-95 can be used to purify active PDZ domain-binding proteins to very high purity in a single step. Five heterologously expressed neuronal proteins containing endogenous PDZ domain ligands (the NMDAR GluN2B subunit tail, synGAP, the neuronal nitric oxide synthase PDZ domain, cysteine-rich interactor of PDZ three, and cypin) were purified using PDZ domain resin, with synthetic peptides having the sequences of cognate PDZ domain ligands used as elution agents. I also show that conjugation of PDZ domain-related affinity tags to proteins of interest (POIs) that do not contain endogenous PDZ domains or ligands does not alter protein activity and enables purification of the POIs on PDZ domain-related affinity resins.
Abstract:
Research shows that executive function and social–behavioral adjustment during the preschool years are both associated with the successful acquisition of academic readiness abilities. However, studies bringing these constructs together in one investigation are lacking. This study addresses this gap by testing the extent to which social and behavioral adjustment mediated the association between executive function and academic readiness. Sixty-nine children aged 63–76 months, enrolled in the last semester of the preschool year, participated in the present study. Tasks were administered to measure executive function and preacademic abilities, and teachers rated the preschoolers' social–behavioral adjustment. Hierarchical regression analyses revealed that social–behavioral adjustment was a significant mediator of the effect of executive function on academic readiness, even after controlling for maternal education and child verbal ability. These findings extend prior research and suggest that executive function contributes to early academic achievement by influencing preschoolers' opportunities to be engaged in optimal social learning activities.
Abstract:
Although the primary objective in designing a structure is to support the external loads, achieving an optimal layout that reduces all costs associated with the structure is an aspect of increasing interest. The problem of finding the optimal layout for bridge-like structures subjected to a uniform load is considered. The problem is formulated following a theory on the economy of frame structures, using the stress volume as the objective function and including the selection of appropriate values for the statically indeterminate reactions. It is solved in a function space of finite dimension instead of using a general variational approach, obtaining near-optimal solutions. The results obtained with this profitable strategy are very close to the best layouts known to date, with differences of less than 2% in stress volume, but with a simpler layout that can be recognized in some real bridges. This strategy could serve as a guide for the preliminary design of bridges subject to a wide class of costs.
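For intuition about the objective function: the stress volume of a pin-jointed structure is V = Σ |F_i| L_i / σ, the material volume required when every member works at the allowable stress σ. The toy sketch below optimizes the layout of a two-bar truss in this spirit, reducing the layout search to a one-parameter function space; it illustrates the stress-volume criterion, not the bridge problem solved in the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def stress_volume(h, a=1.0, P=1.0, sigma=1.0):
    """Stress volume V = sum(|F_i| * L_i) / sigma of a symmetric two-bar
    truss: supports 2a apart, apex at depth h carrying a point load P.
    Layout optimization reduces to choosing the single parameter h."""
    L = np.hypot(a, h)                 # length of each bar
    F = P * L / (2.0 * h)              # axial force in each bar, from statics
    return 2.0 * F * L / sigma         # V(h) = P (a^2 + h^2) / (sigma h)

res = minimize_scalar(stress_volume, bounds=(0.01, 10.0), method='bounded')
# res.x -> 1.0: the optimal layout has 45-degree bars (h = a), the
# classical result for this configuration
```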
Abstract:
Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models found in the literature are based on minimizing costs or maximizing cover, i.e. covering as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is considered to be constant and known in advance. In a real-world situation, input data like demand and travelling costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. A way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility that stays within predefined limits, as well as possible, under all foreseeable circumstances. The deviation robustness concept is used as the basis to develop a new competitive deviation robustness model. The competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share despite variations in demand. A test case is developed by which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which Simulated Annealing turned out to be the most appropriate. The test case is slightly modified for a competitive market situation. The developed competitive deviation model is solved with the Simulated Annealing algorithm for three considered norms of deviation. Finally, a grid search is performed to illustrate the landscape of the objective function of the competitive deviation model. The objective appears to be multimodal, making the model a challenging subject for further research.
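A minimal sketch of the Huff-based market-share computation underlying the robustness model might look like this (the attractiveness term and the distance-decay exponent beta are standard Huff-model ingredients; their exact form in the thesis is an assumption here):

```python
import numpy as np

def huff_market_share(new_loc, competitor_locs, demand_pts, demand_w,
                      attractiveness=1.0, beta=2.0):
    """Expected market share captured by a candidate facility under the
    Huff model: a customer at demand point i patronizes facility j with
    probability A_j / d_ij^beta, normalized over all facilities.
    demand_pts: (m, 2) coordinates; demand_w: (m,) demand weights."""
    demand_pts = np.asarray(demand_pts, float)
    demand_w = np.asarray(demand_w, float)

    def utility(loc):
        d = np.linalg.norm(demand_pts - np.asarray(loc, float), axis=1)
        return attractiveness / np.maximum(d, 1e-9) ** beta

    u_new = utility(new_loc)
    u_all = u_new + sum(utility(c) for c in competitor_locs)
    share_i = u_new / u_all                 # capture probability per point
    return float(np.dot(demand_w, share_i) / demand_w.sum())

# robustness then asks whether this share stays above a minimum for all
# expected variations of demand_w, e.g.:
# huff_market_share((0, 0), [(2, 1), (-1, 3)], [(1, 1), (0, 2)], [5.0, 3.0])
```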