858 results for Robust Probabilistic Model, Dyslexic Users, Rewriting, Question-Answering
Abstract:
This work seeks to answer the following research question: which variables influence collecting behavior from the consumer's perspective? To that end, it uses complementary methods: content analysis and structural equation modeling. The following variables were identified: thematic ideal, involvement, desire, pleasure, permanence, space occupation and display, knowledge, and authority. The thematic-ideal construct was developed, and scales were built to measure it as well as the variables permanence, space occupation and display, knowledge, authority, and collecting. The results suggest that the structural model of collecting developed here is consistent, reliable, and valid by the various analysis criteria employed (model fit indices; verification of the model's hypotheses; coefficients of determination; direct, indirect, and total effects of the model's paths; and analysis of alternative models) and that it therefore represents a plausible construction of the collecting phenomenon within consumer behavior, and that collecting is a way of extending the collector's self in a differentiated, special way. In this study, this was linked to the influence of the thematic ideal on collecting.
Abstract:
This thesis develops and evaluates a business model for connected full electric vehicles (FEV) for the European market. Despite a promoting political environment, various barriers have thus far prevented the FEV from becoming a mass-market vehicle. Besides cost, the most noteworthy of these barriers is range anxiety, a product of FEVs' limited range, lacking availability of charging infrastructure, and long recharging times. Connected FEVs, which maintain a constant connection to the surrounding infrastructure, appear to be a promising element in overcoming drivers' range anxiety. Yet their successful application requires a well-functioning FEV ecosystem, which can only be created through the collaboration of various stakeholders such as original equipment manufacturers (OEM), first-tier suppliers (FTS), charging infrastructure and service providers (CISP), utilities, communication enablers, and governments. This thesis explores and evaluates what a business model jointly created by these stakeholders could look like, i.e., how stakeholders could collaborate in the design of products, services, infrastructure, and advanced mobility management to present drivers with a sensible value proposition that is at least equivalent to that of internal combustion engine (ICE) cars. It suggests that this value proposition will be an end-to-end package provided by CISPs or OEMs that comprises mobility packages (incl. pay-per-mile plans, battery leasing, charging and battery swapping (BS) infrastructure) and FEVs equipped with an on-board unit (OBU), combined with additional services targeted at range anxiety reduction. From a theoretical point of view, the thesis answers the question of which business model framework is suitable for developing a holistic, i.e., all-stakeholder-comprising, business model for connected FEVs, and defines such a business model.
In doing so, the thesis provides the first comprehensive business-model-related research findings on connected FEVs, as prior works focused on the much less complex scenario featuring only "offline" FEVs.
Abstract:
The growing use of information and communication technology (ICT) resources in companies, aimed at modernization, agility, cost reduction, and other goals, has brought many benefits, but it has also become an enormous problem for the planet. The amount of electronic waste (e-waste) generated by ICT equipment has doubled every five years, making it one of the main focuses of attention in recent years. The volume of e-waste generated by discarded ICT equipment already exceeds 50 million tons per year, equivalent to eight times the total waste production of the city of São Paulo. The Brazilian electronics market is considered the fifth largest in the world, after China, the United States, Japan, and Russia. Brazil's total e-waste production in 2011 was one million tons, of which the share attributable to ICT equipment was estimated at 98 thousand tons. Against this backdrop, this study aims to contribute to expanding knowledge of green supply chain management (GSCM) applied to the Brazilian ICT business context. Specifically, it seeks to identify which factors influence the adoption and application of green ICT management in large ICT-using companies in Brazil, based on the models proposed by Molla (2008) and Molla and Coopers (2008). The study thus addresses the following research problem: which factors influence large companies using information and communication technology (ICT) in Brazil in adopting green management concepts? To this end, a case study was conducted in six large companies, all leaders in their sectors, representing major service and manufacturing areas. As a final result, a new analytical model was proposed, which appeared better suited to the service sector. The study also found that, in green ICT management, manufacturing companies have different priorities from service companies.
Their operational challenges are often more critical to sustainability than green ICT management itself. On the other hand, the study of services provided by the public sector, despite large annual budgets, pointed to legal constraints and deficiencies in staff qualification and training as limiting factors for implementing broader green management programs.
Abstract:
We study semiparametric two-step estimators which have the same structure as parametric doubly robust estimators in their second step. The key difference is that we do not impose any parametric restriction on the nuisance functions that are estimated in a first stage, but retain a fully nonparametric model instead. We call these estimators semiparametric doubly robust estimators (SDREs), and show that they possess superior theoretical and practical properties compared to generic semiparametric two-step estimators. In particular, our estimators have substantially smaller first-order bias, allow for a wider range of nonparametric first-stage estimates, rate-optimal choices of smoothing parameters and data-driven estimates thereof, and their stochastic behavior can be well-approximated by classical first-order asymptotics. SDREs exist for a wide range of parameters of interest, particularly in semiparametric missing data and causal inference models. We illustrate our method with a simulation exercise.
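The doubly robust second-step structure the abstract refers to can be sketched for the simplest case, the mean of an outcome missing at random. The simulated data, the polynomial first-stage fits, and all variable names below are illustrative assumptions, not the estimators studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated missing-data problem: Y is observed only when R = 1.
n = 5000
X = rng.uniform(-1.0, 1.0, n)
p_true = 1.0 / (1.0 + np.exp(-X))             # true propensity P(R = 1 | X)
R = rng.binomial(1, p_true)
Y = 1.0 + 2.0 * X + rng.normal(0.0, 1.0, n)   # E[Y] = 1 is the target

# First stage: flexible (here, polynomial) nuisance fits stand in for the
# fully nonparametric estimators the paper allows.
pi_hat = np.clip(np.poly1d(np.polyfit(X, R, 3))(X), 0.05, 0.95)
obs = R == 1
mu_hat = np.poly1d(np.polyfit(X[obs], Y[obs], 1))(X)

# Second stage: doubly robust (AIPW-type) moment for E[Y] -- consistent if
# either the propensity fit or the outcome regression is correct.
theta = np.mean(mu_hat + R * (Y - mu_hat) / pi_hat)
```

The second-stage line is the parametric doubly robust form; the paper's point is what changes when the two nuisance fits are nonparametric.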
Abstract:
This paper discusses distribution and the historical phases of capitalism. It assumes that technical progress and growth are taking place and, given that, asks how income is functionally distributed between labor and capital, taking as reference the classical theory of distribution and Marx's falling tendency of the rate of profit. Based on historical experience, it first inverts the model, making the rate of profit the constant variable in the long run and the wage rate the residuum; second, it distinguishes three types of technical progress (capital-saving, neutral, and capital-using) and applies them to the history of capitalism, with the UK and France as reference. Given these three types of technical progress, it distinguishes four phases of capitalist growth, of which only the second is consistent with Marx's prediction. The last phase, after World War II, should in principle be capital-saving, consistent with growth of wages above productivity. Instead, since the 1970s wages have been kept stagnant in rich countries because of, first, the fact that the Information and Communication Technology Revolution proved to be highly capital-using, opening room for a new wave of substitution of capital for labor; second, the new competition coming from developing countries; third, the emergence of the technobureaucratic or professional class; and, fourth, the new power of the neoliberal class coalition associating rentier capitalists and financiers.
Abstract:
Cognition is a core subject for understanding how humans think and behave. In that sense, Cognition is clearly a great ally to Management, as the latter deals with people and is very interested in how they behave, think, and make decisions. However, even though Cognition shows great promise as a field, there are still many topics to be explored and learned in this fairly new area. Kemp & Tenenbaum (2008) addressed a graph-structure learning problem in which, given a dataset, the best underlying structure and form emerge from that dataset through Bayesian probabilistic inference. This work is very interesting because it addresses a key cognition problem: learning. According to the authors, analogical insights and discoveries (understanding the relationships of elements and how they are organized) play a very important part in cognitive development; that is, these are very basic phenomena that enable learning. Human minds, however, do not function like a computer running Bayesian probabilistic inference; people seem to think differently. Thus, we present a cognitively inspired method, KittyCat, based on FARG computer models (like Copycat and Numbo), to solve the proposed problem of discovering the underlying structural form of a dataset.
Abstract:
Atypical points in the data may result in meaningless efficient frontiers. This follows since portfolios constructed using classical estimates may reflect neither the usual nor the unusual days' patterns. On the other hand, portfolios constructed using robust approaches are able to capture just the dynamics of the usual days, which constitute the majority of business days. In this paper we propose a statistical model and a robust estimation procedure to obtain an efficient frontier which takes into account the behavior of both the usual and most of the atypical days. We show, using real data and simulations, that portfolios constructed in this way require less frequent rebalancing and may yield higher expected returns for any risk level.
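As a rough illustration of why classical and robust estimates disagree when atypical days are present, the sketch below contrasts minimum-variance weights from a classical covariance with those from a crude trimmed estimate. The simulated returns, the 5% Mahalanobis trimming rule, and the pure minimum-variance objective are assumptions for illustration only, not the model or estimator proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns for 3 assets: mostly "usual" days plus a few
# atypical, high-volatility days.
usual = rng.multivariate_normal([0.001] * 3, 1e-4 * np.eye(3), 480)
atypical = rng.multivariate_normal([0.0] * 3, 1e-2 * np.eye(3), 20)
ret = np.vstack([usual, atypical])

def min_var_weights(cov):
    """Minimum-variance weights: w proportional to inv(cov) @ 1, summing to 1."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

# Classical estimate: every day enters the covariance, atypical ones included.
w_classical = min_var_weights(np.cov(ret, rowvar=False))

# Crude robust alternative: drop the 5% most outlying days by Mahalanobis
# distance, then re-estimate the covariance from the retained days.
centered = ret - ret.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(ret, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
keep = d2 <= np.quantile(d2, 0.95)
w_robust = min_var_weights(np.cov(ret[keep], rowvar=False))
```

The paper's procedure is more refined than simple trimming, but the contrast between `w_classical` and `w_robust` conveys the basic tension it addresses.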
Abstract:
A longstanding unresolved question is whether the one-period Kyle Model of an informed trader and a noisily informed market maker has an equilibrium that is different from the closed-form solution derived by Kyle (1985). This note advances what is known about this open problem.
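For reference, the closed-form linear equilibrium from Kyle (1985) that the note takes as its benchmark is standardly written, with $\Sigma_0=\operatorname{Var}(v)$ the insider's informational advantage and $\sigma_u^2$ the noise-trade variance, as:

```latex
x(v) = \beta\,(v - p_0), \qquad p = p_0 + \lambda\,(x + u),
\qquad
\lambda = \frac{1}{2}\sqrt{\frac{\Sigma_0}{\sigma_u^2}}, \qquad
\beta = \sqrt{\frac{\sigma_u^2}{\Sigma_0}}
```

The open question the note addresses is whether any equilibrium outside this linear family exists.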
Abstract:
This study investigates the out-of-sample predictive power, one month ahead, of a Taylor-rule-based model for exchange rate forecasting. We review relevant studies concluding that macroeconomic models can explain the short-run exchange rate, as well as studies that are skeptical about the ability of macroeconomic variables to predict exchange rate movements. To contribute to the topic, this work presents its own evidence by implementing the model with the best predictive performance reported by Molodtsova and Papell (2009), the "symmetric Taylor rule model with heterogeneous coefficients, smoothing, and a constant". To do so, we use a sample of 14 currencies against the US dollar, generating monthly out-of-sample forecasts from January 2000 to March 2014. Following the criterion adopted by Galimberti and Moura (2012), we focus on countries that adopted floating exchange rates and inflation targeting, but we select currencies from both developed and developing countries. Our results corroborate Rogoff and Stavrakeva (2008) in finding that conclusions about exchange rate predictability depend on the statistical test adopted, making robust and rigorous tests necessary for proper model evaluation. After finding that the implemented model cannot be said to deliver more accurate forecasts than a random walk, we assess whether the model can at least generate "rational", or "consistent", forecasts. For this, we use the theoretical and instrumental framework defined and implemented by Cheung and Chinn (1998), and we conclude that the Taylor-rule model's forecasts are "inconsistent". Finally, we run Granger causality tests to check whether lagged values of the returns predicted by the structural model explain observed contemporaneous values.
We find that the fundamentals-based model is unable to anticipate realized returns.
Abstract:
Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups.
We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
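The mechanism behind the over-/under-rejection, namely that aggregating groups of different sizes induces heteroskedasticity even when individual errors are homoskedastic, can be checked in a few lines. The group sizes and the pure-noise setup below are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(2)

# Group-by-time aggregation: homoskedastic individual errors yield
# group-mean errors with variance proportional to 1 / group size.
sizes = [20, 2000]            # a small and a large group (illustrative)
agg_sd = []
for m in sizes:
    # 2000 simulated group-time cells, each averaging m individual errors
    group_means = rng.normal(0.0, 1.0, (2000, m)).mean(axis=1)
    agg_sd.append(group_means.std())
# The small group's aggregated errors are roughly 10x noisier than the
# large group's, even though every individual error had the same variance.
```

This is the variance pattern the authors propose to model explicitly when constructing their inference procedure.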
Abstract:
We consider multistage stochastic linear optimization problems combining joint dynamic probabilistic constraints with hard constraints. We develop a method for projecting decision rules onto hard constraints of wait-and-see type. We establish the relation between the original (infinite-dimensional) problem and approximating problems working with projections from different subclasses of decision policies. Considering the subclass of linear decision rules and a generalized linear model for the underlying stochastic process with noises that are Gaussian or truncated Gaussian, we show that the value and gradient of the objective and constraint functions of the approximating problems can be computed analytically.
Abstract:
The aim of this study was to establish the profile of the pharmacists technically responsible for community pharmacies in the city of Natal/RN, characterizing personal attributes, their perceived role and the place of pharmaceutical care, levels of job satisfaction, and the type and quality of services provided within the human and structural framework. To that end, an exploratory cross-sectional study was carried out using a questionnaire with open and closed questions, applied to the pharmacists responsible for community pharmacies in Natal/RN from September 2010 to September 2011. The sample was established by simple random sampling, with a confidence level of 95% and a significance level of 0.05. To evaluate satisfaction with the activities performed by pharmacists in community pharmacies, a simple satisfaction scale was used (Likert, 1935). To assess pharmacists' attitudes and perceptions regarding aspects of pharmaceutical care, the attitude-toward-the-object model was used (Fishbein, Ajzen, 1975). The answers were converted into data and analyzed statistically using Epi Info 3.5.2. The results showed that the strengths and weaknesses of the pharmacist's profile and activities in community pharmacies in Natal/RN do not differ from those in other cities in the country. The most important findings were: in 51% (n = 90) of the establishments visited, the pharmacist was absent; 46% (n = 80) had no postgraduate training, and of those pursuing or holding it, 33% (n = 51) were in the area of Clinical Analysis; 56% (n = 98) worked 8 hours per day, and 64% (n = 111) claimed that this workload influences their performance; 83% (n = 146) received as salary the pharmacist's wage floor for the state of Rio Grande do Norte; 44% (n = 76) were unhappy about the salary, which was the main difficulty cited; 78% (n = 136) said they are always sought out by users, and their receptivity was considered good (52%, n = 91).
The activities with the highest satisfaction were those related to pharmaceutical care, and the lowest, administrative ones. As for attitudes and perceptions, the most negative score was for the question of whether the pharmacist feels they work as a team with the physician, to which 59% (n = 103) responded 'never'. 49% (n = 86) reported being 'able' to answer users' questions, and 39% (n = 68) were 'dissatisfied' with the structure of the pharmacy practice setting for pharmaceutical care. Action is needed on the obstacles to the pharmacist's practice, to minimize the negative aspects and give positive stimulus to
Abstract:
The Family Health Program, implemented in Brazilian municipalities from 1994 onward, today represents the most promising proposal for important changes in municipal health systems: allowing universal access to health care, comprehensiveness, and equity, and promoting social control, achievements provided by the health reform process and incorporated into the principles of the Unified Health System. However, the Family Health Program faces many challenges in bringing about these advances. In this study, we aimed to answer the following research question: what are the results of the Family Health Program for beneficiaries in small, medium, and large municipalities? The guiding hypothesis was that the variation in levels of achievement/results (strict results, impacts, and effects) of the Family Health Program is related to the size of the municipalities. Therefore, our general aim was to evaluate the results of the Family Health Program in municipalities of Rio Grande do Norte, Brazil. Our specific objectives were to measure the Program's strict results, effects, and impacts, based on criteria of efficiency and effectiveness for the beneficiary population, and to measure the Program's impact on the organization of the municipal health system. This is an impact assessment study, developed from multiple case studies with a quantitative-qualitative approach. The study included small (Acari and Taipu), midsize (Canguaretama and Santa Cruz), and large (Natal and Mossoró) municipalities. The individuals chosen for the research were users/beneficiaries of the Program and health professionals. Data analysis was performed using descriptive statistics and content analysis, compared against the Program's logical/theoretical model.
The results for the principles evaluated (universality, comprehensiveness, and community participation) showed that municipalities differ in their results, although not directly in relation to size; rather, the differences relate to how the Program was implemented in each municipality and the arrangements made for its operationalization. The positive effect that generated significant change in people's lives was linked to increased access and reduced geographic barriers. For the municipal health system, however, the changes desired by the Program were not observed; instead of a positive impact, a negative one was found, related to increased barriers for users to access other levels of the health system.
Abstract:
Static and cyclic tests are commonly used to characterize materials in structures. Cyclic tests assess the fatigue behavior of the material, yielding S-N curves that are then used to construct constant-life diagrams. However, when constructed from a small number of S-N curves, these diagrams underestimate or overestimate the actual behavior of the composite, so more tests are needed to obtain more accurate results. One way of reducing costs is, therefore, statistical analysis of the fatigue behavior. The aim of this research was to evaluate the probabilistic fatigue behavior of composite materials. The research was conducted in three parts. The first part consists of associating the Weibull probability equation with the equations commonly used in modeling composite S-N curves, namely the exponential equation and the power law and their generalizations. In the second part, the results obtained with the probability equation that best represents the S-N curves were used to train a modular network for the 5% failure level. In the third part, we carried out a comparative study of the results obtained with the piecewise nonlinear model (PNL) against those of a modular network architecture (MN) in the analysis of fatigue behavior. For this, we used a database of ten materials obtained from the literature to assess the generalization ability of the modular network as well as its robustness. The results showed that the generalized probabilistic power law best represents the probabilistic fatigue behavior of composites and that, although the MN's generalization was not robust when trained at the 5% failure level, for mean values the MN showed more accurate results than the PNL model.
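A minimal sketch of the kind of probabilistic S-N description discussed here, combining a power-law (Basquin-type) median curve with Weibull scatter in fatigue life, is given below. The data points, the Weibull shape parameter, and the function names are illustrative assumptions, not the models fitted in the research:

```python
import numpy as np

# Hypothetical S-N data: stress amplitude (MPa) vs. cycles to failure.
S = np.array([300.0, 250.0, 200.0, 150.0])
N = np.array([1e4, 5e4, 3e5, 2e6])

# Basquin-type power law, N = A * S**(-b), fitted in log-log space.
slope, log_A = np.polyfit(np.log(S), np.log(N), 1)
A, b = np.exp(log_A), -slope

def n_median(s):
    """Median fatigue life predicted by the power law at stress s."""
    return A * s ** (-b)

# Weibull scatter in life around the curve:
# P(fail by n) = 1 - exp(-(n / eta)**beta).
beta = 2.0  # assumed Weibull shape parameter for life scatter

def n_at_failure_prob(s, p):
    """Life by which a fraction p of specimens fails (p = 0.05 gives a design curve)."""
    # Scale eta chosen so the Weibull median coincides with the fitted curve.
    eta = n_median(s) / np.log(2.0) ** (1.0 / beta)
    return eta * (-np.log(1.0 - p)) ** (1.0 / beta)
```

The 5% curve `n_at_failure_prob(s, 0.05)` lies below the median curve, which is the sense in which a probabilistic fit yields conservative design lives.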
Abstract:
The interdisciplinary nature of Astronomy makes it a field of great potential for exploring various scientific concepts. However, studies show a great lack of understanding of fundamental subjects, including the models that explain phenomena marking everyday life, like the phases of the Moon. Particularly in the context of distance education, learning such models can be favored by the use of information and communication technologies. Among other possibilities, we highlight the importance of digital materials that motivate and expand the available forms of representation of phenomena and models. It is also important, however, that these materials promote the explicitation of students' conceptions, as well as interaction with the most central aspects of the astronomical model for the phenomenon. In this dissertation we present a hypermedia module aimed at learning about the phases of the Moon, built from an investigation of the difficulties with the subject during an Astronomy course for teacher training at the undergraduate level at UFRN. The tests from three semesters of the course were analyzed, also taking into account the alternative conceptions reported in the astronomy education literature. The product makes use of short texts, questions, images, and interactive animations. It emphasizes questions about the illumination of the Moon and other bodies and their relationship to the Sun, the perception from different angles of objects illuminated by a single source, the cause of the alternation between day and night, the identification of the Moon's orbit around the Earth and the occurrence of the phases as a result of the observer's position, and the perception of the time scales involved in the phenomenon. The module incorporated considerations obtained from interviews with students at the two poles where in-person support is provided to students of the course, and with subjects from different pedagogical contexts.
The final form of the material was used in a real learning situation, as supplementary material for the final test of the discipline. The material was analyzed by 7 students and 4 tutors, among 56 users, in the period in question. Most students considered that the so-called "Lunar Module" made a difference in their learning: the animations were considered the most prominent aspect, the images were described as stimulating and enlightening, and the text as informative and enjoyable. The analysis of these students' learning, observing their responses to issues raised in the last evaluation, suggested gains in key aspects of understanding the phases, but also indicated more persistent difficulties. The work leads us to conclude that it is important to seek contributions to the training of science teachers using new technologies, treating the computer as a complementary resource. The interviews that preceded the use of the module, and whether the student approached the module with prior questions and/or conflicts, made a great difference in the material's effective contribution, indicating that it should be used with the mediation of a teacher or tutor, or via strategies that promote interactions between students. It is desirable that these interactions be associated with recovering the subjects' memories of previous observations and models, as well as with stimulating new observations of the phenomena.