891 results for "theory of the dependence of resource"
Abstract:
Perhaps due to its origins in a production scheduling software package called Optimised Production Technology (OPT), together with its idea of focusing on system constraints, many believe that the Theory of Constraints (TOC) has a vocation for optimal solutions. Those who assess TOC from this perspective point out that it guarantees an optimal solution only in certain circumstances. In opposition to this view, and founded on a numerical example of a production mix problem, this paper shows, by means of TOC's own assumptions, why TOC should not be compared with methods that seek optimal or best solutions, but rather with those that seek sufficiently good solutions attainable in non-deterministic environments. Moreover, we extend the literature on the product mix decision by introducing a heuristic based on the single work we identified that aims at feasible solutions from the TOC point of view. The proposed heuristic is tested on 100 production mix problems, and the results are compared with those obtained by Integer Linear Programming. The heuristic gives good results on average, but its performance falls sharply in some situations. © 2013 Copyright Taylor and Francis Group, LLC.
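As an illustration of the kind of heuristic at issue, the classic TOC product-mix rule ranks products by throughput earned per minute on the constraint resource and allocates bottleneck capacity greedily. The sketch below is a generic textbook version with invented product data, not the heuristic proposed in the paper:

```python
# Minimal sketch of the classic TOC product-mix heuristic.
# All product names and figures below are illustrative.

def toc_product_mix(products, capacity):
    """products: list of (name, throughput_per_unit,
    constraint_minutes_per_unit, max_demand).
    Returns a dict name -> integer units to produce."""
    # Rank products by throughput earned per minute of the bottleneck.
    ranked = sorted(products, key=lambda p: p[1] / p[2], reverse=True)
    mix, remaining = {}, capacity
    for name, tp, minutes, demand in ranked:
        units = min(demand, remaining // minutes)  # whole units only
        mix[name] = units
        remaining -= units * minutes
    return mix

# Toy instance: the bottleneck offers 2400 minutes.
products = [("P", 45, 15, 100), ("Q", 60, 30, 50)]
print(toc_product_mix(products, 2400))  # → {'P': 100, 'Q': 30}
```

Because the ratio ranking ignores how leftover capacity combines across products, this greedy rule can miss the optimum on some instances, which is consistent with the abstract's observation that performance falls sharply in some situations.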
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Presents a citation analysis of indexing research in the two Proceedings. Recognizing that there are different traditions of research into indexing, we look for evidence of them in the citing and cited authors. After applying Price's elitism analysis, three areas of cited and citing authors surface, each roughly corresponding to a geographic distribution.
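Price's elitism analysis rests on his square-root rule: the "elite" of a specialty comprises roughly the square root of the total number of contributors. A minimal sketch of the selection step, with invented author names and citation counts:

```python
# Hedged sketch of Price's square-root elite selection; the author
# table is invented, not data from the study.
import math

def price_elite(author_counts):
    """Return the sqrt(N) most-cited authors (Price's elite)."""
    k = round(math.sqrt(len(author_counts)))
    ranked = sorted(author_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

counts = {"A": 40, "B": 25, "C": 9, "D": 7, "E": 5,
          "F": 3, "G": 2, "H": 1, "I": 1}
print(price_elite(counts))  # → ['A', 'B', 'C']  (sqrt(9) = 3 authors)
```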
Resumo:
We consider an N–S box system consisting of a rectangular conductor coupled to a superconductor. The Green functions are constructed by solving the Bogoliubov-de Gennes equations on each side of the interface, with the pairing potential described by a step-like function. Taking into account the mismatch in the Fermi wave number and the effective masses of the normal metal and the superconductor, as well as the tunnel barrier at the interface, we use the quantum section method to find the exact energy Green function, yielding accurately computed eigenvalues and the density of states. Furthermore, this procedure allows us to analyze in detail the nontrivial semiclassical limit and to examine the range of applicability of the Bohr-Sommerfeld quantization method.
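For reference, the Bogoliubov-de Gennes equations solved on each side of the interface take the standard single-band textbook form shown below; this sketch uses a common effective mass and the step-like pairing the abstract mentions, while the actual calculation also incorporates the mass and Fermi wave-number mismatch:

```latex
% Standard Bogoliubov-de Gennes equations: u, v are electron- and
% hole-like amplitudes, \mu the chemical potential, and the pairing
% potential is modeled as a step, \Delta(\mathbf{r}) = \Delta_0\,\theta(x).
\begin{aligned}
\left[-\frac{\hbar^{2}}{2m}\nabla^{2}-\mu\right]u(\mathbf{r})
  +\Delta(\mathbf{r})\,v(\mathbf{r}) &= E\,u(\mathbf{r}),\\
\Delta^{*}(\mathbf{r})\,u(\mathbf{r})
  -\left[-\frac{\hbar^{2}}{2m}\nabla^{2}-\mu\right]v(\mathbf{r}) &= E\,v(\mathbf{r}).
\end{aligned}
```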
Abstract:
The passage of the Native American Graves Protection and Repatriation Act (NAGPRA) in 1990 significantly changed the way archaeology would be done in the United States. The act was presaged by growing complaints and resentment directed at the scientific community by Native Americans over the treatment of their ancestral remains. Many of the underlying issues came to a head with the discovery of, and subsequent court battles over, the 9,200-year-old individual commonly known as Kennewick Man. This had a galvanizing effect on the discipline, not only perpetuating the sometimes adversarial relationship between archaeologists and Native Americans, but also creating a rift between those archaeologists who understood Native American concerns and those who saw ancestral skeletal remains as representing the legacy of humankind and thus belonging to everyone. Similar scenarios have emerged in Australia.
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
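The core dynamics of such models can be illustrated with a minimal one-dimensional neural field of the Amari type, the class of equation DFT builds on. The sketch below is purely illustrative, with invented parameter values, not the model of the paper:

```python
# Minimal 1-D Amari-type dynamic neural field: local excitation plus
# global inhibition, driven by a localized input. Parameters (tau, h,
# kernel widths, gains) are illustrative, not fitted values.
import numpy as np

def simulate_field(steps=200, n=101, tau=10.0, h=-5.0):
    x = np.arange(n)
    u = np.full(n, h)                      # field starts at resting level h
    stim = 6.0 * np.exp(-(x - 50) ** 2 / (2 * 3.0 ** 2))   # localized input
    # Local-excitation / lateral-inhibition interaction kernel.
    d = np.subtract.outer(x, x)
    w = 4.0 * np.exp(-d ** 2 / (2 * 4.0 ** 2)) - 1.0
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))   # sigmoidal firing rate
        du = -u + h + stim + w @ f / n       # Amari field dynamics
        u = u + du / tau
    return u

u = simulate_field()
print(u.argmax())  # → 50: an activation peak forms at the stimulus site
```

A peak of above-threshold activation that is localized, self-stabilized by excitatory interactions, and kept from spreading by inhibition is the basic unit of representation in DFT; developmental sharpening of the kernels is what the spatial precision hypothesis manipulates.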
Abstract:
This study tested a dynamic field theory (DFT) of spatial working memory and an associated spatial precision hypothesis (SPH). Between 3 and 6 years of age, there is a qualitative shift in how children use reference axes to remember locations: 3-year-olds’ spatial recall responses are biased toward reference axes after short memory delays, whereas 6-year-olds’ responses are biased away from reference axes. According to the DFT and the SPH, quantitative improvements over development in the precision of excitatory and inhibitory working memory processes lead to this qualitative shift. Simulations of the DFT in Experiment 1 predict that improvements in precision should cause the spatial range of targets attracted toward a reference axis to narrow gradually over development, with repulsion emerging and gradually increasing until responses to most targets show biases away from the axis. Results from Experiment 2 with 3- to 5-year-olds support these predictions. Simulations of the DFT in Experiment 3 quantitatively fit the empirical results and offer insights into the neural processes underlying this developmental change.
Abstract:
This article proposes that Peircean semiotics can offer both logical and epistemological foundations for the pursuit of a general theory of communication. However, the development of a semiotic theory of communication depends, first of all, on a better understanding of the formal aspects of the sign, a task Peirce assigned to grammar, the first branch of his semiotics. We present an analysis of the relations of the sign, revealing an aspect not worked out by Peirce and expanding their number to eleven. This new aspect is the triadic relation among sign, dynamic object and dynamic interpretant (S-OD-ID). We argue that this relation is essential for understanding communication as semiosis, since it accounts for the repetition or redundancy of the communicative sign when information is created or transmitted. The article intends to take a further step toward a truly universal theory of communication by linking Peircean semiotics to the modern philosophy of language.
Abstract:
We review recent progress in the mathematical theory of quantum disordered systems: the Anderson transition, including some joint work with Marchetti, the (quantum and classical) Edwards-Anderson (EA) spin-glass model and return to equilibrium for a class of spin-glass models, which includes the EA model initially in a very large transverse magnetic field. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4770066]
Abstract:
Some atomic multipoles (charges, dipoles and quadrupoles) from the Quantum Theory of Atoms in Molecules (QTAIM) and CHELPG charges are used to investigate interactions between a proton and a molecule (F2, Cl2, BF, AlF, BeO, MgO, LiH, H2CO, NH3, PH3, BF3, and CO2). Calculations were done at the B3LYP/6-311G(3d,3p) level. The main aspect of this work is the investigation of polarization effects on electrostatic potentials and atomic multipoles over medium to long interaction distances. Large electronic charge fluxes and polarization changes are induced by the proton, mainly when this positive particle approaches the least electronegative atom of a diatomic heteronuclear molecule. The search for simple equations describing polarization of electrostatic potentials from QTAIM quantities resulted in linear relations in r^-4 (where r is the interaction distance) for many cases. Moreover, the contribution from atomic dipoles to these potentials is usually the contribution most affected by polarization, which reinforces the need for these dipoles in even a minimal description of purely electrostatic interactions. Finally, CHELPG charges provide a description of polarization effects on electrostatic potentials that disagrees with physical arguments for some of these molecules. (c) 2012 Wiley Periodicals, Inc.
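The reported linear r^-4 relations are at least consistent with textbook charge-induced-dipole electrostatics. As a point of reference (not a result of the paper), the induction energy of a point charge q at distance r from a molecule of isotropic polarizability alpha is, in atomic units:

```latex
% Textbook charge-induced-dipole (induction) energy in atomic units;
% \alpha is the isotropic polarizability, q the proton charge, r the
% interaction distance. Offered only as a plausible origin of the
% r^{-4} scaling observed in the paper.
E_{\text{ind}}(r) \;=\; -\,\frac{\alpha\, q^{2}}{2\, r^{4}}
```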
Abstract:
In this paper, we address the problem of defining the product mix so as to maximise a system's throughput. This problem is well known to be NP-complete; therefore, most contributions on the topic focus on developing heuristics able to obtain good solutions in a short CPU time. In particular, constructive heuristics are available for the problem, such as those by Fredendall and Lea and by Aryanezhad and Komijan. We propose a new constructive heuristic based on the Theory of Constraints and the Knapsack Problem. The computational results indicate that the proposed heuristic yields better results than the existing heuristics.
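The knapsack connection can be made concrete: with a single bottleneck, maximizing throughput subject to bottleneck capacity is a bounded knapsack problem, with constraint minutes as weights and unit throughput as values. The sketch below solves a toy instance exactly by dynamic programming; it illustrates the underlying model, not the paper's constructive heuristic:

```python
# Bounded-knapsack view of the single-bottleneck product-mix problem,
# solved by classic DP over bottleneck minutes. Figures are invented.

def product_mix_knapsack(products, capacity):
    """products: list of (throughput_per_unit, bottleneck_minutes_per_unit,
    max_demand). Returns the maximum achievable total throughput."""
    best = [0] * (capacity + 1)
    for tp, minutes, demand in products:
        for _ in range(demand):            # expand bounded item into copies
            for c in range(capacity, minutes - 1, -1):
                best[c] = max(best[c], best[c - minutes] + tp)
    return best[capacity]

# Toy instance: the bottleneck offers 100 minutes.
print(product_mix_knapsack([(45, 15, 4), (60, 30, 2)], 100))  # → 240
```

The DP is exact but pseudo-polynomial in the capacity, which is why constructive heuristics that run in a short CPU time remain the practical choice on large instances.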
Abstract:
The dependence of the floating-body RAM sense margin and retention time on the gate length is investigated in UTBOX devices using BJT programming combined with a positive back bias (so-called Vth feedback). It is shown that the sense margin and the retention time can be kept constant versus the gate length by using a positive back bias. Nevertheless, below a critical gate length there is no room for optimization, and the memory performance suddenly drops. The mechanism behind this degradation is attributed to GIDL current amplification by the lateral bipolar transistor with a narrow base. The gate length can be further scaled using underlap junctions.
Abstract:
The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, because their performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution proposed. A simulation study is performed in order to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying different conditions (e.g., the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, with their output, are given.
The CoClust algorithm is tested on simulated data (by varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with the performance of model-based clustering by using different measures of performance, such as the percentage of well-identified numbers of clusters and the non-rejection percentage of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
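The criterion at the heart of such copula-based clustering, scoring a grouping by the maximized copula log-likelihood, can be illustrated in miniature. The sketch below is not the dissertation's CoClust algorithm; it only fits a bivariate Gaussian copula to pseudo-observations (both standard tools) and shows that a genuinely dependent pair of variables scores far above an independent one, which is the signal a dependence-based clusterer exploits:

```python
# Miniature illustration of a copula log-likelihood clustering criterion.
# The data-generating setup below is invented for demonstration.
import numpy as np
from scipy import stats

def gaussian_copula_loglik(x, y):
    """Maximized bivariate Gaussian-copula log-likelihood of (x, y)."""
    # Pseudo-observations: ranks rescaled to (0, 1), then probit-transformed.
    u = stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))
    v = stats.norm.ppf(stats.rankdata(y) / (len(y) + 1))
    r = np.corrcoef(u, v)[0, 1]           # estimate of the copula parameter
    n = len(x)
    # Bivariate Gaussian copula log-density summed over observations.
    return (-n / 2 * np.log(1 - r ** 2)
            + np.sum((2 * r * u * v - r ** 2 * (u ** 2 + v ** 2))
                     / (2 * (1 - r ** 2))))

rng = np.random.default_rng(0)
z = rng.normal(size=500)
dependent = z + 0.3 * rng.normal(size=500)   # strongly coupled to z
noise = rng.normal(size=500)                 # independent of z
# The dependent pair scores far higher, so it would be clustered with z.
print(gaussian_copula_loglik(z, dependent) > gaussian_copula_loglik(z, noise))
```

Because the score is computed on ranks, it reflects the dependence structure alone, independently of the margins, which mirrors the property claimed for the CoClust above.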