846 results for GENERAL-THEORY
Abstract:
Graduate Program in Law - FCHS
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Remanufacturing is the process of rebuilding used products so that the quality of remanufactured products is equivalent to that of new ones. Although the theme is gaining ground, it is still little explored, owing to a lack of knowledge and to the difficulty of visualizing it systemically and implementing it effectively. Few models treat remanufacturing as a system; most studies still treat it as an isolated process, preventing it from being seen in an integrated manner. The aim of this work is therefore to organize knowledge about remanufacturing, offering a vision of the remanufacturing system and contributing to an integrated view of the theme. The methodology employed was a literature review, adopting the General Theory of Systems to characterize the remanufacturing system. This work consolidates and organizes the elements of this system, enabling a better understanding of remanufacturing and assisting companies in adopting the concept.
Abstract:
In a previous paper, we connected the phenomenological noncommutative inflation of Alexander, Brandenberger and Magueijo [Phys. Rev. D 67 081301 (2003)] and Koh and Brandenberger [J. Cosmol. Astropart. Phys. 2007 21] with the formal representation theory of groups and algebras, and analyzed minimal conditions that the deformed dispersion relation should satisfy in order to lead to a successful inflation. In that paper, we showed that elementary tools of algebra allow a group-like procedure in which even Hopf algebras (roughly, the symmetries of noncommutative spaces) could lead to the equation of state of inflationary radiation. Nevertheless, in this paper, we show that there exists a conceptual problem with the kind of representation that leads to the fundamental equations of the model. The problem comes from an incompatibility between one of the minimal conditions for successful inflation (the momentum of individual photons being bounded from above) and the Fock-space structure of the representation which leads to the fundamental inflationary equations of state. We show that the Fock structure, although mathematically allowed, would lead to problems with the overall consistency of physics, for example a problematic scattering theory. We suggest replacing the Fock space by one of two possible structures that we propose. One of them relates to the general theory of Hopf algebras (here explained at an elementary level), while the other is based on a representation theorem of von Neumann algebras (a generalization of the Clebsch-Gordan coefficients), a proposal already suggested by us to take into account interactions in the inflationary equation of state.
Abstract:
This work studies aeroelastic phenomena of fluid-structure interaction, with the aim of simulating them with the help of a finite element code. The first chapter provides some notions of fluid dynamics, so as to clarify the fundamental theoretical steps leading to the Navier-Stokes equations governing the motion of viscous fluids. It also illustrates the phenomenon of vortex formation downstream of bluff bodies, caused by separation of the laminar boundary layer, together with a description of some results obtained from numerical simulations. The second chapter reviews the main fluid-structure interaction phenomena, seeking to highlight the foundations of their analytical treatment and the hypotheses under which that treatment is valid. This is clearly only an overview, which does not enter into the most recent research developments but provides the basis for tackling the various structural instability problems caused by particular phenomena of interaction with the wind. The third chapter contains a more in-depth treatment of flutter instability. Among all aeroelastic instability phenomena affecting structures, flutter is the most dangerous, especially for long-span bridges. A dedicated chapter was therefore considered appropriate, in order to illustrate the various procedures by which the critical flutter velocity of a bridge deck can be determined analytically, starting from the experimental functions known as flutter derivatives. The chapter ends by illustrating the procedure for obtaining the flutter derivatives of a bridge deck experimentally. The fourth chapter presents the case study of the deck of the Tsing Ma Bridge in Hong Kong. 
It reports the analytical results of the calculations of the flutter and torsional-divergence velocities of the deck, and the results of the numerical simulations carried out to estimate the static aerodynamic coefficients and the dynamic behaviour of the structure subjected to wind action. Considerations and comments on the results obtained and on the numerical modelling methods adopted complete the work.
Abstract:
This dissertation deals with the problems and opportunities of a semiotic approach to perception. Is perception, seen as the ability to detect and articulate a coherent picture of the surrounding environment, describable in semiotic terms? Is it possible, for a discipline wary of any attempt to reduce semiotic meaning to a psychological and naturalized issue, to come to terms with the cognitive, automatic and genetically hard-wired specifics of our perceptive systems? In order to deal with perceptive signs, is it necessary to modify basic assumptions in semiotics, or can we simply extend the range of our conceptual instruments and definitions? And what if perception is a wholly different semiotic machinery, to be considered sui generis, but nonetheless interesting for a general theory of semiotics? By expounding the major ideas put forward by the main thinkers in the semiotic field, Mattia de Bernardis gives a comprehensive picture of the theoretical situation, adding to the classical dichotomy between structuralist and interpretative semiotics another distinction, that between homogeneist and heterogeneist theories of perception. Homogeneist semioticians see perception as one of many semiotic means of sign production, entirely similar to the others, while heterogeneist semioticians consider perceptive meaning as essentially different from ordinary semiotic meaning, so much so that it requires new methods and ideas to be analyzed. The main example of the heterogeneist approach to perception in the semiotic literature, Umberto Eco's "primary semiosis", is then presented, critically examined and eventually rejected, and the homogeneist stance is affirmed as the most promising path towards a semiotic theory of perception.
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models in which the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical words, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution in functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution, and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e. a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, yielding a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which this prior distribution is able to correct for the ill-posedness even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a "frequentist" sense. Once the general theory is set, I apply my Bayesian nonparametric methodology to different estimation problems. First, I apply this estimator to deconvolution and to hazard rate, density and regression estimation. 
Then, I consider the estimation of an instrumental regression, which is useful in microeconometrics when we have to deal with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
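To make the role of Tikhonov regularization in an ill-posed inverse problem concrete, here is a minimal numerical sketch, not the thesis's Bayesian construction: a discretized smoothing operator `K` (a hypothetical Gaussian-kernel example) maps an unknown function to noisy observations, and the estimate is stabilized by penalizing the norm of the solution.

```python
import numpy as np

# Hypothetical discretized ill-posed problem K f = y: K is a smoothing
# (convolution-type) operator, so naive inversion amplifies noise.
rng = np.random.default_rng(0)

n = 50
t = np.linspace(0.0, 1.0, n)
K = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2)
K /= K.sum(axis=1, keepdims=True)          # row-normalized averaging operator

f_true = np.sin(2 * np.pi * t)             # "true" functional parameter
y = K @ f_true + 1e-3 * rng.standard_normal(n)  # noisy observations

# Tikhonov estimate: argmin_f ||K f - y||^2 + alpha ||f||^2
alpha = 1e-3                               # regularization parameter
f_hat = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)

err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
```

The regularization parameter `alpha` plays the same stabilizing role that the regularization scheme plays in the construction of the regularized posterior distribution: it trades a small bias for continuity of the solution in the observations.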
Abstract:
This thesis studies a class of stochastic processes that possess an abstract branching property. The processes considered are time-homogeneous continuous-time Markov processes with states in multidimensional real space and its one-point compactification. Starting from minimal requirements on the associated transition function, a complete characterization of the finite-dimensional distributions of multidimensional continuous-state branching processes is given. With the help of an extended Laplace calculus, it is shown that every such process is uniquely determined by a certain spectrally positive infinitely divisible distribution. Conversely, it is proved that for every such infinitely divisible distribution an associated branching process can be constructed. Using the general theory of Markov operator semigroups, it is established that every multidimensional continuous-state branching process has a version with paths in the space of cadlag functions. Furthermore, (functional) weak convergence of the processes can be reduced to vague convergence of the associated characterizations. This yields general approximation and convergence theorems for the class of processes considered. These general results are applied to the subclass of branching diffusions. It is shown that such processes always have a version with continuous paths. Finally, the most general form of Feller's diffusion approximation for multitype Galton-Watson processes is proved.
Abstract:
This doctoral thesis unfolds into a collection of three distinct articles that share an interest in supply firms, or "peripheral firms". The three studies offer a novel theoretical perspective that I call the peripheral view of manufacturing networks. Building on the relational view literature, this new perspective adopts a supplier-based theoretical standpoint to analyze and explain the antecedents of relational rents in manufacturing networks. The first article, the namesake of the dissertation, is a theoretical contribution that explains the foundations of the "peripheral view of manufacturing networks". The second article, "Framing The Strategic Peripheries: A Novel Typology of Suppliers", is an empirical study that aims to offer an interpretation of peripheries' characteristics and dynamics. The third article, "What is Behind Absorptive Capacity? Dispelling the Opacity of R&D", presents an example of general theory development using data from peripheral firms.
Abstract:
Precision measurements of phenomena related to fermion mixing require the inclusion of higher-order corrections in the calculation of the corresponding theoretical predictions. For this, a complete renormalization scheme for models that allow for fermion mixing is much needed. The correct treatment of unstable particles makes this task difficult, and no satisfactory general solution can yet be found in the literature. In the present work, we study the renormalization of the fermionic Lagrange density with Dirac and Majorana particles in models that involve mixing. The first part of the thesis provides a general renormalization prescription for the Lagrangian, while the second is an application to specific models. In a general framework, using the on-shell renormalization scheme, we identify the physical mass and the decay width of a fermion from its full propagator. The so-called wave function renormalization constants are determined such that the subtracted propagator is diagonal on-shell. As a consequence of absorptive parts in the self-energy, the constants that are supposed to renormalize the incoming fermion and the outgoing antifermion differ from those that should renormalize the outgoing fermion and the incoming antifermion, and are not related by hermiticity, as would be desired. Instead of defining field renormalization constants identical to the wave function renormalization ones, we differentiate the two by a set of finite constants. Using the additional freedom offered by this finite difference, we investigate the possibility of defining field renormalization constants related by hermiticity. We show that for Dirac fermions, unless the model has very special features, the hermiticity condition leads to ill-defined matrix elements due to self-energy corrections of external legs. In the case of Majorana fermions, the constraints on the model are less restrictive. 
Here one might have a better chance of defining field renormalization constants related by hermiticity. After analysing the complete renormalized Lagrangian in a general theory including vector and scalar bosons with arbitrary renormalizable interactions, we consider two specific models: quark mixing in the electroweak Standard Model and mixing of Majorana neutrinos in the seesaw mechanism. A counterterm for fermion mixing matrices cannot be fixed by taking into account only self-energy corrections or fermion field renormalization constants. The presence of unstable particles in the theory can lead to a non-unitary renormalized mixing matrix or to a gauge-parameter dependence in its counterterm. Therefore, we propose to determine the mixing-matrix counterterm by fixing the complete correction terms for a physical process to experimental measurements. As an example, we calculate the decay rate of a top quark and of a heavy neutrino. In each of the chosen models we provide sample calculations that can easily be extended to other theories.
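The on-shell identification of mass and width from the full propagator mentioned above can be sketched in a common textbook convention (an illustration of the standard relations, not the thesis's specific conventions; $m_0$ and $\Sigma$ denote the bare mass and the self-energy):

```latex
% Full fermion propagator dressed by the self-energy \Sigma:
S(p) = \frac{i}{\slashed{p} - m_0 - \Sigma(\slashed{p})},
% the physical mass m and decay width \Gamma are read off from the
% complex pole of S(p) in the variable s = p^2:
\qquad s_{\text{pole}} = m^2 - i\, m\, \Gamma .
```

The absorptive (imaginary) parts of $\Sigma$ that generate the width $\Gamma$ are exactly what spoils the naive hermiticity relations among the wave function renormalization constants discussed in the abstract.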
Abstract:
The Dutch astronomer Willem de Sitter is known for his now-famous controversy with Einstein from 1916 to 1918, in which relativistic cosmology was founded. In this context his name is associated with the cosmological model he created, which he devised as a counterexample to Einstein's physical intuition. Although this debate has already been analyzed in works on the history of science, de Sitter's role in the reception and dissemination of general relativity has not yet received the attention it deserves in mainstream Einstein studies. The present investigation aims to demonstrate his central importance for research on general relativity within the Leiden community. Like Eddington, de Sitter was one of the few astronomers who combined both sufficient training and the necessary interests to pursue first special and then general relativity. He first engaged with the relativity principle (Einstein's first postulate of special relativity) in 1911; two years later he found evidence for the constancy of the speed of light (Einstein's second postulate). De Sitter's interest in theories of gravitation reaches back even further and can be traced to 1908. Moreover, he followed Einstein's attempts to construct a field-theoretic approach to gravitation, including the controversial Einstein-Grossmann theory of 1913. These circumstances clearly show that de Sitter's better-known work on general relativity was a consequence of his earlier research, and not the result of a sudden engagement with Einstein's theory of relativity beginning only in 1916.
Abstract:
The research analyzes clauses, contained in international commercial contracts, that appear designed to supply in advance a methodology for the interpretation of the contract. The work therefore analyzes the validity and effectiveness of individual specific clauses, such as "entire agreement clauses", "no oral modification clauses", clauses containing definitions, and the like, in light of the heteronomous legal rules applicable to the contract, whether these are represented by a national law, by an international convention of uniform substantive law, or by further so-called soft-law sources, such as the UNIDROIT Principles of International Commercial Contracts. The research reveals that, contrary to first appearances, several of the types of clauses analyzed do not involve issues of contract interpretation so much as issues of documentation and form of the contract. The work concludes with some considerations of general theory of law.
Abstract:
This work analyzes the roles played by the legislator and by the public administration, respectively, in designing and implementing public policies aimed at promoting models of economic development characterized by a high degree of environmental sustainability. To this end, the work is divided into four chapters. The first chapter considers the main elements of general theory that frame the topic. This first phase of the research focuses, on the one hand, on a historical-evolutionary analysis of the concept of the environment in light of the prevailing legal scholarship and, on the other, on the formation of the concept of sustainable development, with particular regard to its environmental dimension. In the central part of the work, comprising the second and third chapters, the analysis turns to three areas of inquiry that are decisive for a systematic framing of public policies in the sector: the system of relations among the many actors (international, national and local) involved in seeking solutions to the systemic environmental crisis; the identification and definition of the set of substantive principles that govern the system of environmental protection and guide policy choices in the sector; and the main (legal and economic) protection instruments currently in force. The fourth and final chapter considers the policies concerning the procedures for authorizing the construction and operation of power plants fed by renewable energy sources, analyzed as a specific case that can be taken as a paradigm of the role played by the legislator and the public administration in the field of sustainable development policy. 
The analysis shows a high degree of complexity in the institutional and organizational system, together with evident limits of efficiency in the administrative authorization regime introduced by the national legislator.
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties such as verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea underlying intelligent lexical acquisition systems is to modify this schematic formula so that the system can exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. 
To name four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language may render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
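The update step G + L + S → L' can be sketched as a small toy loop, a hypothetical illustration rather than the thesis's HPSG machinery: parse-derived structures S are represented as (lemma, frame) observations, an entry is assigned only once enough consistent evidence has accumulated, and an earlier entry contradicted by the data is retracted, modelling the revision of falsely acquired knowledge. The names and thresholds below are illustrative assumptions.

```python
from collections import Counter, defaultdict

MIN_EVIDENCE = 3   # "has the system seen 'enough' input?"
MIN_SHARE = 0.8    # dominant frame must account for 80% of observations

def update_lexicon(lexicon, observations):
    """Return a revised lexicon L' from parse-derived observations S."""
    counts = defaultdict(Counter)
    for lemma, frame in observations:
        counts[lemma][frame] += 1

    revised = dict(lexicon)
    for lemma, frames in counts.items():
        total = sum(frames.values())
        frame, n = frames.most_common(1)[0]
        if total >= MIN_EVIDENCE and n / total >= MIN_SHARE:
            revised[lemma] = frame          # acquire or confirm an entry
        elif lemma in revised and revised[lemma] not in frames:
            del revised[lemma]              # retract: no supporting evidence
    return revised

obs = ([("devour", "transitive")] * 4
       + [("sleep", "intransitive")] * 3
       + [("run", "transitive")])
L0 = {"run": "ditransitive"}                # a falsely acquired entry
L1 = update_lexicon(L0, obs)
```

Here "devour" and "sleep" clear the evidence threshold and enter L', while the falsely acquired entry for "run" is retracted because the new observations contradict it without yet licensing a replacement.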
Abstract:
The possibility of violence is ubiquitous in human social relations; its forms are manifold and its causes complex. Different types of violence are interrelated, but in complex ways, and they are studied within a wide range of disciplines, so that a general theory, while possible, is difficult to achieve. This paper acknowledges that violence can negate power and that all forms of social power can entail violence, and proceeds on the assumption that the organisation of violence is itself a particular source of social power. It therefore explores the general relationships of violence to power; the significance of war as the archetype of organised violence; the relationships of other types (revolution, terrorism, genocide) to war; and the significance of civilian-combatant stratification for the understanding of all types of organised violence. It then discusses the problems of applying conceptual types in analysis and the necessity of a historical framework for theorising violence. The paper concludes by offering such a framework in the transition from industrialised total war to global surveillance war.