469 results for Winner’s Curse


Relevance:

10.00%

Publisher:

Abstract:

Three decades after the emergence of AIDS in the world, we still find ourselves silenced before a disease feared to be named because of the weight of death its name carries. The disease, which emerged as a curse upon sexual minorities, sex workers, and drug users, imprinted on their bodies and souls the symbol of an immediate, shameful, and painful death. Throughout this trajectory, its viral body has been “radiographed” by medical science, yet it has not been freed from the equation that AIDS equals death, which would reduce it to the condition of “a disease.” This dissertation presents an overview of AIDS, tracing its emergence, the identification of the virus, and the metaphors used to position the illness in the biological and moral locus of the modern world. Metaphors of stigmatizing diseases that fall upon health-care settings were used to present the Hospital Universitário João de Barros Barreto as a space marked by the stigma of tuberculosis and AIDS, which gives it an image of horror among the population of Pará. In this space, care was provided to people whose relations of helplessness and dependence on an object/act external to the ego may have led to their exposure to HIV. From these, one clinical case was selected for the study of this phenomenon. As an attempt to understand the relationship with the object of dependence, the Winnicottian concepts of the “good-enough maternal environment” and the “transitional object” were drawn upon, together with “addiction” as developed by Joyce McDougall. The work showed that the emptiness reported in psychotherapy, very common among these patients, is circumscribed by addictive relations, as a mode of defense that allows the object to be taken as a vital external maternal substitute.
To listen to these dependencies, the perspective of the Winnicottian clinic was adopted, with the possibility of repositioning the patient so as to modify his or her internal and external reality through creative action.

Relevance:

10.00%

Publisher:

Abstract:

Graduate Program in Arts - IA

Relevance:

10.00%

Publisher:

Abstract:

Micro and small enterprises form a special group of companies given their potential for development, employability, and integration into society. In Brazil, however, they have high mortality rates, which led to the creation of the Statute of the Micro and Small Enterprise to encourage them through tax benefits and advantages in public procurement processes. On the other side of the debate, the public procurement process is the moment when the public sector engages the private sector to deliver works, services, and purchases. These procurement processes have weaknesses of their own, such as excessive bureaucracy, delays, and corruption, leading us to ask: is it a good idea to promote the development of micro and small enterprises through a flawed procurement process? This discussion draws on a theoretical framework and on an analysis of two case studies in which the procurement winners were enterprises benefiting from the Statute of the Micro and Small Enterprise.

Relevance:

10.00%

Publisher:

Abstract:

The article presents the São Paulo State University Corporate Education Program (UNESPCorp), whose goal is to develop the institution's staff toward their professional improvement using distance education technologies (D-learning). The UNESPCorp pilot project started with the course Improvement in Bidding and Public Employment. The course was taught in the second semester of 2012 and received 130 employees representing all UNESP campuses. The text is divided into two parts: the first recovers the historical context that gave rise to the corporate university; the second presents the challenges and advances of the first edition of the course, as well as its pedagogical structure of operation.

Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to verify which tactical behaviors differentiate winning from losing teams in small-sided games with youth soccer players. The tactical performance of winning and losing teams was compared through the System of Tactical Assessment in Soccer (FUT-SAT). A total of 3,808 tactical actions were carried out by seventy-two youth soccer players from the under-11 (n=12), under-13 (n=12), under-15 (n=30), and under-17 (n=18) categories of Portuguese teams. Twenty-four teams were formed for the analysis; each team played one match (12 matches analyzed). Each team consisted of three outfield players plus a goalkeeper, who was not analyzed in the test. Statistical analysis was performed with SPSS 17.0 for Windows: descriptive analysis plus the Kolmogorov-Smirnov, chi-square, Mann-Whitney U, and independent-samples t-tests, with Cohen's kappa used to determine the reliability of the sample. Of the 76 variables analyzed, 12 showed significant differences between players of winning and losing teams. Players of the winning teams were superior in the macro-category Action and in the defensive tactical principles Balance and Defensive Unity. In the category Location of Action in the Field, the winning teams performed more defensive actions in the defensive midfield. In the category Result of the Action, the winning teams scored higher on the variables "Keep without the ball," "Retrieving ball possession," and "Shots on goal," while the losing teams scored higher on "Losing ball possession" and "Suffering shot on goal." In the macro-category Performance, the superiority of the winners was shown by a better Tactical Performance Index (TPI) in the principle Penetration, in Offensive Unity, and in the offensive phase of the game. The results demonstrate that the winning players were superior in both phases of the game.
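
As an aside on the method: the Mann-Whitney U statistic used in the study compares two independent samples by ranks. A minimal illustrative implementation with hypothetical action counts (not data from the study):

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic: for every cross pair, count whether the
    value from sample_a exceeds the value from sample_b, with ties
    counted as 0.5 (illustrative implementation)."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# hypothetical counts of successful defensive actions per match
winners = [14, 17, 15, 16, 18]
losers = [9, 11, 10, 12, 8]
u_stat = mann_whitney_u(winners, losers)  # maximum possible is 5 * 5 = 25
```

A U equal to (or near) the maximum of len(a) * len(b) indicates that nearly every winner value outranks every loser value.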

Relevance:

10.00%

Publisher:

Abstract:

Background: A current challenge in gene annotation is to define gene function in the context of a network of relationships rather than through single genes. The inference of gene networks (GNs) has emerged as an approach to better understand the biology of the system and to study how the components of such a network interact with each other and keep their functions stable. In general, however, there are not enough data to accurately recover GNs from expression levels alone, leading to the curse of dimensionality, in which the number of variables is higher than the number of samples. One way to mitigate this problem is to integrate biological data rather than use only expression profiles in the inference process. The use of additional biological information in inference methods has increased significantly, in order to better recover the connections between genes and to reduce false positives. What makes this strategy so interesting is the possibility of confirming known connections through the included biological data, and of discovering new relationships between genes when the expression data are observed. Although several works on data integration have improved the performance of network inference methods, the real contribution of each type of biological information to the obtained improvement is not clear. Methods: We propose a methodology for including biological information in an inference algorithm in order to assess its prediction gain when biological information and expression profiles are used together. We also evaluated and compared the gain from adding four types of biological information: (a) protein-protein interaction, (b) Rosetta stone fusion proteins, (c) KEGG, and (d) KEGG+GO. Results and conclusions: This work presents a first comparison of the gain from using prior biological information in the inference of GNs for a eukaryotic organism (P. falciparum).
Our results indicate that information based on direct interaction can produce a greater improvement than data about less specific relationships such as GO or KEGG. As expected, the results also show that the use of biological information is a very important approach for improving the inference. We further compared the gain in the inference of the global network with that of the hubs only; the results indicate that the use of biological information can improve the identification of the most connected proteins.
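
The role of prior knowledge in this setting can be sketched with a toy scoring rule; the blending scheme and all numbers below are hypothetical illustrations, not the paper's algorithm:

```python
import numpy as np

def score_edges(expr, prior, alpha=0.5):
    """Toy integration of prior biological knowledge into network
    inference: blend the absolute expression correlation of each gene
    pair with a 0/1 prior-evidence matrix (e.g. known protein-protein
    interactions). `alpha` weighs the prior; this is a hypothetical
    scoring rule. expr: (samples, genes); prior: (genes, genes)."""
    corr = np.abs(np.corrcoef(expr, rowvar=False))  # (genes, genes)
    np.fill_diagonal(corr, 0.0)                     # ignore self-edges
    return (1.0 - alpha) * corr + alpha * prior

# few samples, many genes: the curse-of-dimensionality setting
rng = np.random.default_rng(0)
expr = rng.normal(size=(8, 50))                     # 8 samples, 50 genes
expr[:, 1] = expr[:, 0] + 0.1 * rng.normal(size=8)  # genes 0, 1 co-regulated
prior = np.zeros((50, 50))
prior[0, 1] = prior[1, 0] = 1.0                     # PPI evidence for edge (0, 1)
scores = score_edges(expr, prior)
```

With only 8 samples, spurious pairwise correlations among the 50 genes can be large, but edges without prior support are capped at (1 - alpha) times their correlation, so the supported edge dominates the ranking.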

Relevance:

10.00%

Publisher:

Abstract:

[ES] This book gathers an extensive set of questions on various aspects of geography that the authors have posed over the course of their teaching, and whose framing reflects a reflective, quite personal style of conveying the Earth sciences. Two motives encouraged the writing of these pages from the outset: on the one hand, the growing sensitivity toward the environment and toward knowing and understanding the processes that modify the Earth's geography; on the other, the excessive descriptive load from which geographical science still suffers. For this reason, the aim in writing the book has been to rationalize the main situations described by physical geography and to address their explanation with a distinctly reflective approach. The intended reader is the first-year university or final-year secondary-school student taking subjects related to the Earth and environmental sciences. This circumstance has shaped the methodology followed and the structure given to the work.

Relevance:

10.00%

Publisher:

Abstract:

This thesis treats auctions from a technical standpoint, describing their models and main characteristics, which are often unknown even to their users. Chapter 1 briefly introduces the concept of an auction, describing its main constituent elements, and then retraces the origins of this procedure and some of its uses. Chapter 2 first presents the main known auction formats and touches on the process of valuing the auctioned object. It then introduces the concept of private value, analyzing it for each auction type and comparing the formats in terms of revenue. A fundamental principle, revenue equivalence, is then stated, and some of its basic assumptions are relaxed. Finally, the chapter turns to interdependent values in auctions, assessing equilibria, revenue, and efficiency, while touching on the problem known as the winner's curse. Chapter 3 discusses online auction mechanisms, focusing on an important property, truthfulness, and analyzing them through worst-case and average-case analysis in some ad hoc examples. Chapter 4 describes sponsored search auctions in particular, first recounting their history and then analyzing equilibria, revenue, and efficiency; finally, a model of these auctions is presented, relating their computability to that of known offline mechanisms.
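
The winner's curse discussed in Chapter 2 can be illustrated with a short simulation, assuming a common-value setting in which naive bidders simply bid their noisy signals (all parameters are hypothetical):

```python
import random

def simulate_winners_curse(n_bidders=10, true_value=100.0, noise=20.0,
                           n_auctions=5000, seed=42):
    """Common-value auction: each bidder receives a noisy but unbiased
    signal of the true value and naively bids it. Averaged over many
    auctions, the winning signal exceeds the true value because the
    winner is, by selection, the most optimistic estimator: the
    winner's curse."""
    rng = random.Random(seed)
    winning_signals = []
    for _ in range(n_auctions):
        signals = [true_value + rng.gauss(0.0, noise)
                   for _ in range(n_bidders)]
        winning_signals.append(max(signals))  # highest signal wins
    return sum(winning_signals) / n_auctions

avg_winning_bid = simulate_winners_curse()
# avg_winning_bid systematically exceeds true_value = 100.0
```

The overbidding grows with the number of bidders, since the maximum of more noisy signals drifts further above the common value; a rational bidder must shade the bid accordingly.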

Relevance:

10.00%

Publisher:

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, applied particularly in artificial intelligence, which can be used for the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems through non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so that the explicit choice of nodes/basis functions is unnecessary and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are also linear approximators, which are technically easy to handle and for which the existing convergence guarantees of reinforcement learning remain valid (unlike, say, feed-forward neural networks). All these theoretical advantages, however, face a very practical problem: the computational cost of regularization networks inherently scales as O(n**3), where n is the number of data points.
This is especially problematic because in reinforcement learning the learning process is online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis accordingly falls into two parts. In the first part, we formulate for regularization networks an efficient learning algorithm for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on recursive least-squares but can, in constant time, insert not only new data but also new basis functions into the existing model. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at runtime. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into an overall system for the autonomous learning of optimal behavior. Altogether, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. We do not depend on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications.
We demonstrate the power of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
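
The recursive least-squares building block mentioned in the first part can be sketched generically; the following is a plain online RLS updater via the Sherman-Morrison identity, not the thesis's subset-of-regressors algorithm:

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic online least-squares sketch: each new (features, target)
    pair updates the weights in O(d^2) via the Sherman-Morrison rank-1
    identity, avoiding an O(n^3) batch solve over all n samples."""

    def __init__(self, dim, reg=1.0):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) / reg  # inverse of the regularized Gram matrix

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # gain vector
        self.w += k * (y - x @ self.w)  # correct the prediction error
        self.P -= np.outer(k, Px)       # rank-1 downdate of P

    def predict(self, x):
        return np.asarray(x, dtype=float) @ self.w

# fit y = 2*x1 - 3*x2 from a stream of samples, one at a time
rls = RecursiveLeastSquares(dim=2, reg=1e-3)
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=2)
    rls.update(x, 2.0 * x[0] - 3.0 * x[1])
```

Each `update` call has constant cost in the number of previously seen samples, which is the property that makes this family of methods usable in the online agent-environment loop described above.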

Relevance:

10.00%

Publisher:

Abstract:

What changes has the official dismantling of apartheid brought to the novel writing of the two South African Nobel Prize winners? Focusing on Gordimer's Get a Life and Coetzee's Elizabeth Costello, this study reflects on that central question. Theoretically, elements of cultural studies, such as the popular, liberalism, and fiction as political work, inform the analysis. Narratology and structuralism support the literary study of the narratives, which constitutes the core of the reflection. The study finds that the post-apartheid novel as written by Coetzee and Gordimer has the same ideological orientation as their writing during the apartheid era: they write from the dominant perspective and the dominant group. This situation challenges both Gordimer's and Coetzee's standing within South African literature.

Relevance:

10.00%

Publisher:

Abstract:

Social experience influences the outcome of conflicts such that winners are more likely to win again and losers will more likely lose again, even against different opponents. Although winner and loser effects prevail throughout the animal kingdom and crucially influence social structures, the ultimate and proximate causes for their existence remain unknown. We propose here that two hypotheses are particularly important among the potential adaptive explanations: the 'social-cue hypothesis', which assumes that victory and defeat leave traces that affect the decisions of subsequent opponents; and the 'self-assessment hypothesis', which assumes that winners and losers gain information about their own relative fighting ability in the population. We discuss potential methodologies for experimental tests of the adaptive nature of winner and loser effects.
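
The feedback loop at the heart of winner and loser effects can be made concrete with a toy simulation, assuming a hypothetical weight-update rule (not a model proposed in the article):

```python
import random

def simulate_contests(n_agents=100, n_rounds=50, boost=0.05, seed=7):
    """Toy winner/loser-effect dynamics: each agent carries a latent
    'fighting ability' weight; winning a contest raises it and losing
    lowers it, so early outcomes feed back on later ones. All
    parameters are hypothetical."""
    rng = random.Random(seed)
    weights = [0.5] * n_agents
    wins = [0] * n_agents
    for _ in range(n_rounds):
        i, j = rng.sample(range(n_agents), 2)       # random pairing
        p_i = weights[i] / (weights[i] + weights[j])
        winner, loser = (i, j) if rng.random() < p_i else (j, i)
        wins[winner] += 1
        weights[winner] = min(1.0, weights[winner] + boost)  # winner effect
        weights[loser] = max(0.05, weights[loser] - boost)   # loser effect
    return wins, weights

wins, weights = simulate_contests()
```

Even with identical starting weights, the positive feedback concentrates victories in a subset of agents, which is the kind of emergent social structure the review attributes to these effects.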

Relevance:

10.00%

Publisher:

Abstract:

In biostatistical applications interest often focuses on the estimation of the distribution of a time-until-event variable T. If one observes whether or not T exceeds an observed monitoring time at a random number of monitoring times, then the data structure is called interval censored data. We extend this data structure by allowing the presence of a possibly time-dependent covariate process that is observed until the end of follow-up. If one only assumes that the censoring mechanism satisfies coarsening at random, then, by the curse of dimensionality, typically no regular estimators will exist. To fight the curse of dimensionality we follow the approach of Robins and Rotnitzky (1992) by modeling parameters of the censoring mechanism. We model the right-censoring mechanism by modeling the hazard of the follow-up time, conditional on T and the covariate process. For the monitoring mechanism we avoid modeling the joint distribution of the monitoring times by only modeling a univariate hazard of the pooled monitoring times, conditional on the follow-up time, T, and the covariate process, which can be estimated by treating the pooled sample of monitoring times as i.i.d. In particular, it is assumed that the monitoring times and the right-censoring times only depend on T through the observed covariate process. We introduce an inverse probability of censoring weighted (IPCW) estimator of the distribution of T and of smooth functionals thereof, which is guaranteed to be consistent and asymptotically normal if correctly specified semiparametric models for the two hazards of the censoring process are available.
Furthermore, given such correctly specified models for these hazards of the censoring process, we propose a one-step estimator that improves on the IPCW estimator if we correctly specify a lower-dimensional working model for the conditional distribution of T given the covariate process, and that remains consistent and asymptotically normal if this latter working model is misspecified. It is shown that the one-step estimator is efficient if each subject is monitored at most once and the working model contains the truth. In general, it is shown that the one-step estimator optimally uses the surrogate information if the working model contains the truth. It is not optimal in using the interval information provided by the current-status indicators at the monitoring times, but simulations in Peterson, van der Laan (1997) show that the efficiency loss is small.
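
The weighting idea behind the IPCW estimator can be illustrated in the much simpler right-censored setting with a known censoring distribution; this sketch is an illustration of the principle, not the paper's interval-censored estimator:

```python
import numpy as np

def ipcw_survival(t, times, events, censor_survival):
    """Toy inverse-probability-of-censoring-weighted (IPCW) estimate of
    P(T > t) from right-censored data. `times` are observed follow-up
    times, `events` flags uncensored failures, and `censor_survival` is
    an assumed-known survival function of the censoring time. Each
    uncensored subject is reweighted by the inverse probability of
    having remained uncensored until its failure time."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    w = np.where(events, 1.0 / censor_survival(times), 0.0)
    return np.mean(w * (times > t))

rng = np.random.default_rng(1)
n = 20000
T = rng.exponential(1.0, n)    # true event times, P(T > t) = exp(-t)
C = rng.exponential(2.0, n)    # censoring times, P(C > t) = exp(-t/2)
obs, ev = np.minimum(T, C), T <= C
est = ipcw_survival(1.0, obs, ev, lambda t: np.exp(-t / 2.0))
# est approximates P(T > 1) = exp(-1) despite the censoring
```

In practice the censoring survival function is not known and must itself be estimated from a (semiparametric) model, which is exactly where the correctly-specified-hazard conditions of the abstract enter.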

Relevance:

10.00%

Publisher:

Abstract:

In recent years, researchers in the health and social sciences have become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of an exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs subsequently to the exposure and prior to the outcome. Natural direct and indirect effects are of particular interest as they generally combine to produce the total effect of the exposure and therefore provide insight into the mechanism by which it operates to produce the outcome. A semiparametric theory has recently been proposed to make inferences about marginal mean natural direct and indirect effects in observational studies (Tchetgen Tchetgen and Shpitser, 2011), which delivers multiply robust, locally efficient estimators of the marginal direct and indirect effects, and thus generalizes previous results for total effects to the mediation setting. In this paper we extend the new theory to handle a setting in which a parametric model for the natural direct (indirect) effect within levels of pre-exposure variables is specified while the model for the observed data likelihood is otherwise unrestricted. We show that estimation is generally not feasible in this model because of the curse of dimensionality associated with the required estimation of auxiliary conditional densities or expectations, given high-dimensional covariates. We thus consider multiply robust estimation and propose a more general model which assumes that a subset, though not necessarily all, of several working models holds.
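
In the simplest fully parametric special case (linear models without exposure-mediator interaction, hypothetical coefficients), the natural effects discussed above reduce to the classic product-of-coefficients decomposition; a minimal sketch:

```python
import numpy as np

# Simulated data from an assumed linear structural model (all
# coefficients are hypothetical, chosen for illustration):
#   M = 1 + 2*A + e_M          (gamma = 2)
#   Y = 0.5 + 1*A + 0.5*M + e_Y (tau = 1, beta = 0.5)
rng = np.random.default_rng(3)
n = 50000
A = rng.binomial(1, 0.5, n).astype(float)           # exposure
M = 1.0 + 2.0 * A + rng.normal(size=n)              # mediator
Y = 0.5 + 1.0 * A + 0.5 * M + rng.normal(size=n)    # outcome

# fit the two linear working models by ordinary least squares
gamma_hat = np.linalg.lstsq(np.c_[np.ones(n), A], M, rcond=None)[0][1]
coef = np.linalg.lstsq(np.c_[np.ones(n), A, M], Y, rcond=None)[0]
tau_hat, beta_hat = coef[1], coef[2]

nde = tau_hat               # natural direct effect (truth: 1.0)
nie = beta_hat * gamma_hat  # natural indirect effect via M (truth: 1.0)
total = nde + nie           # decomposition of the total effect (truth: 2.0)
```

The paper's contribution is precisely about what happens beyond this toy case, when such parametric working models cannot all be trusted and multiply robust estimators are needed.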

Relevance:

10.00%

Publisher:

Abstract:

We consider inference in randomized studies in which repeatedly measured outcomes may be informatively missing due to dropout. In this setting, it is well known that full data estimands are not identified unless unverified assumptions are imposed. We assume a non-future dependence model for the dropout mechanism and posit an exponential tilt model that links non-identifiable and identifiable distributions. This model is indexed by non-identified parameters, which are assumed to have an informative prior distribution elicited from subject-matter experts. Under this model, full data estimands are shown to be expressed as functionals of the distribution of the observed data. To avoid the curse of dimensionality, we model the distribution of the observed data using a Bayesian shrinkage model. In a simulation study, we compare our approach to a fully parametric and a fully saturated model for the distribution of the observed data. Our methodology is motivated by and applied to data from the Breast Cancer Prevention Trial.
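
The exponential tilt linking identifiable and non-identifiable distributions can be illustrated on a toy discrete outcome; the sensitivity parameter `alpha` below is hypothetical, playing the role of the non-identified parameter elicited from experts:

```python
import numpy as np

def exponential_tilt(pmf, values, alpha):
    """Exponential tilting of a discrete distribution: reweight each
    outcome y by exp(alpha * y) and renormalize. alpha = 0 recovers the
    original distribution; alpha > 0 shifts mass toward larger
    outcomes. Illustrative only, not the paper's full model."""
    w = np.asarray(pmf, dtype=float) * np.exp(alpha * np.asarray(values, dtype=float))
    return w / w.sum()

vals = np.array([0.0, 1.0, 2.0])
p_obs = np.array([0.2, 0.5, 0.3])          # distribution among completers
p_miss = exponential_tilt(p_obs, vals, alpha=0.5)  # posited for dropouts
```

Because `alpha` cannot be learned from the observed data, placing an informative prior on it, as the abstract describes, propagates expert uncertainty about the dropout mechanism into the final inference.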