788 results for Recursive logit
Abstract:
The goal of this study was to develop Multinomial Logit models for the mode choice behavior of immigrants, with key focuses on neighborhood effects and behavioral assimilation. The first aspect shows the relationship between social network ties and immigrants’ chosen mode of transportation, while the second aspect explores the gradual changes toward alternative mode usage with regard to immigrants’ migrating period in the United States (US). Mode choice models were developed for work, shopping, social, recreational, and other trip purposes to evaluate the impacts of various land use patterns, neighborhood typology, socioeconomic-demographic and immigrant related attributes on individuals’ travel behavior. Estimated coefficients of mode choice determinants were compared between each alternative mode (i.e., high-occupancy vehicle, public transit, and non-motorized transport) with single-occupant vehicles. The model results revealed the significant influence of neighborhood and land use variables on the usage of alternative modes among immigrants. Incorporating these indicators into the demand forecasting process will provide a better understanding of the diverse travel patterns for the unique composition of population groups in Florida.
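The core of a multinomial logit mode choice model of the kind described above can be sketched in a few lines; the utility values below are invented for illustration and are not the study's estimated coefficients:

```python
import numpy as np

def mnl_probabilities(V):
    """Multinomial logit choice probabilities from systematic utilities V."""
    expV = np.exp(V - V.max())   # subtract the max for numerical stability
    return expV / expV.sum()

# Invented utilities for four modes: single-occupant vehicle (SOV),
# high-occupancy vehicle, public transit, non-motorized transport
V = np.array([1.2, 0.4, -0.3, -0.8])
P = mnl_probabilities(V)
```

Subtracting the maximum utility before exponentiating leaves the probabilities unchanged but avoids overflow when utilities are large.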
Abstract:
Biodiversity citizen science projects are growing in number, size, and scope, and are gaining recognition as valuable data sources that build public engagement. Yet publication rates indicate that citizen science is still infrequently used as a primary tool for conservation research and the causes of this apparent disconnect have not been quantitatively evaluated. To uncover the barriers to the use of citizen science as a research tool, we surveyed professional biodiversity scientists (n = 423) and citizen science project managers (n = 125). We conducted three analyses using non-parametric recursive modeling (random forest), using questions that addressed: scientists' perceptions and preferences regarding citizen science, scientists' requirements for their own data, and the actual practices of citizen science projects. For all three analyses we identified the most important factors that influence the probability of publication using citizen science data. Four general barriers emerged: a narrow awareness among scientists of citizen science projects that match their needs; the fact that not all biodiversity science is well-suited for citizen science; inconsistency in data quality across citizen science projects; and bias among scientists for certain data sources (institutions and ages/education levels of data collectors). Notably, we find limited evidence to suggest a relationship between citizen science projects that satisfy scientists' biases and data quality or probability of publication. These results illuminate the need for greater visibility of citizen science practices with respect to the requirements of biodiversity science and show that addressing bias among scientists could improve application of citizen science in conservation.
Abstract:
In this work, our purpose is to remind high school mathematics teachers of the recursive process so that they can use this tool to introduce content, employing recursion as an alternative approach to the teaching of mathematics. To this end, we used questions taken from the Exame Nacional do Ensino Médio (ENEM) [National High School Examination] and from the Olimpíada Brasileira de Matemática das Escolas Públicas (OBMEP) [Brazilian Mathematics Olympiad of Public Schools], in addition to presenting some mathematical content that is defined by recursion. In this dissertation, we also present some activities involving recursive reasoning that were applied in a third-year high school class at a public school in Natal/RN.
Abstract:
This thesis investigates strategies for materializing the non-assumption of enunciative responsibility and the inscription of an authorial voice in scientific articles produced by novice researchers in Linguistics. The specific focus lies on identifying, describing and interpreting: i) the linguistic marks that assign enunciative responsibility; ii) the positions taken by the first speaker-enunciator (L1/E1) in relation to points of view (PoV) imputed to second enunciators (e2); and iii) the linguistic marks that signal the formulation of the researchers' own PoV. As a practical outcome, we propose a discussion of how to teach with attention to the textual-discursive strategies concerning enunciative responsibility and authorship in academic and scientific texts. Our research corpus consists of eight scientific articles selected from a renowned Linguistics journal rated highly by Qualis/CAPES (the Brazilian science agency). The methodology follows the assumptions of qualitative, interpretative research, although it is also supported by a quantitative approach. Theoretically, the research is grounded in Textual Analysis of Discourse and in linguistic theories of enunciation. The results show two kinds of movement in PoV management: imputation and responsibility. In imputation contexts, the most recurrent linguistic marks were reported speech, indirect speech, reported speech with "that", and modalization in reported speech (in utterances with "according to", "in agreement with", "for"); beyond these, we observed certain points of non-coincidence of discourse, specifically the non-coincidence of discourse with itself. The way these linguistic marks occur in the texts points to three kinds of enunciative position assumed by L1/E1 in relation to the PoV of e2: agreement, disagreement and pseudo-neutrality.
Imputation followed by agreement (explicit or not) was clearly recurrent; this strategy enlists others' voices to defend a discourse assumed as one's own authorship. In contexts of discourse responsibility, we observed the formulation of the researchers' own PoV resulting either from theoretical readings undertaken by the novice researchers (revealing how they interpreted concepts of the theory) or arising from their research data, allowing them to express themselves more autonomously, without reporting the discourse of e2. Based on these data, we can say that, in texts by novice researchers, authorship is strongly built upon PoV and remains dependent on others' words (the theory and the scholars quoted), given the many contexts in which we observe positions of agreement, PoV formulated with words taken from e2 and assumed as one's own through syntactic integration, comments on what the other says, the absence of explanations and additions, and data analyses that show agreement with the theory used to support the work. These results allow us to visualize how novice researchers dialogue with the theoretical sources they take as support and how they display the status of a subject conducting research, positioning themselves as researcher/author in the scientific field. By treating reported speech, in quotation, as a resource that realizes enunciative responsibility and by highlighting the positions of the speaker-enunciator in relation to reported PoV, this work suggests a textual-discursive treatment of quotation in academic and scientific texts, within a teaching context attentive to developing the communication skills of novice researchers and capable of helping students enter and interact in the scientific field.
Abstract:
The objective of this paper is to analyse the effects of international R&D cooperation on firms' economic performance. Our approach, based on a complete data set with information about Spanish participants in research joint ventures supported by the EU Framework Programme during the period 1995-2005, establishes a recursive model structure to capture the relationship between R&D cooperation, knowledge generation and economic results, measured by labour productivity. The analysis takes into account that participation in this specific type of cooperative project involves a selection process comprising both the self-selection of participants joining the consortia and the selection of projects by the European Commission when awarding public aid. The empirical analysis confirms that: (1) R&D cooperation has a positive impact on the technological capacity of firms, captured through intangible fixed assets, and (2) the technological capacity of firms is positively related to their productivity.
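A recursive (triangular) model structure of this kind can be illustrated with a simulated two-stage estimation: cooperation affects knowledge (intangible assets), which in turn affects productivity. The variable names, effect sizes and plain OLS estimator below are assumptions for illustration, not the paper's econometric specification (which also handles selection):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data under the assumed recursive structure:
# cooperation -> intangible assets (knowledge) -> labour productivity
coop = rng.binomial(1, 0.4, n).astype(float)
intangibles = 0.5 + 0.8 * coop + rng.normal(0, 1, n)
productivity = 1.0 + 0.6 * intangibles + rng.normal(0, 1, n)

def ols(y, X):
    """OLS with an intercept; returns [intercept, slope(s)]."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

b1 = ols(intangibles, coop)          # stage 1: cooperation -> knowledge
b2 = ols(productivity, intangibles)  # stage 2: knowledge -> productivity
```

Because the system is triangular (no feedback from productivity to cooperation), equation-by-equation estimation is coherent; correcting for the selection process described in the abstract would require additional machinery.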
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
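The latent structure (PARAFAC-style) factorization mentioned above represents the joint pmf of p categorical variables as a mixture over latent classes: P(x1..xp) = Σ_h ν_h Π_j λ_j,h(x_j). A minimal sketch, with arbitrary illustrative dimensions and probabilities (not the collapsed Tucker class proposed in Chapter 2):

```python
import numpy as np

rng = np.random.default_rng(4)
p, k, d = 3, 2, 4                  # variables, latent classes, categories
nu = np.array([0.6, 0.4])          # latent class weights, summing to 1
lam = rng.dirichlet(np.ones(d), size=(p, k))   # per-class marginal pmfs

def joint_pmf():
    """Assemble the full joint pmf tensor from the latent class mixture."""
    P = np.zeros((d,) * p)
    for idx in np.ndindex(*P.shape):
        P[idx] = sum(nu[h] * np.prod([lam[j, h, idx[j]] for j in range(p)])
                     for h in range(k))
    return P

P = joint_pmf()
```

The full tensor has d^p cells, while the factorization needs only k(1 + p·d) parameters, which is the dimensionality reduction the chapter relates to log-linear sparsity.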
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
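The idea of a Gaussian approximation centered at the posterior mode, with precision given by the negative Hessian there, can be sketched on a toy one-parameter Poisson model with a flat prior. This is a stand-in for the log-linear setting, not the Diaconis--Ylvisaker case derived in Chapter 4:

```python
import numpy as np

# Toy data: counts modeled as Poisson(exp(theta)) with a flat prior on theta
y = np.array([3, 4, 2, 5, 3, 4])

def log_post(theta):
    """Log-posterior (= log-likelihood here), up to an additive constant."""
    return theta * y.sum() - len(y) * np.exp(theta)

# The posterior mode has a closed form: exp(theta_hat) = mean(y)
theta_hat = np.log(y.mean())
# Negative second derivative at the mode gives the Gaussian precision
precision = len(y) * np.exp(theta_hat)
approx_sd = 1.0 / np.sqrt(precision)
```

The approximate posterior is then N(theta_hat, approx_sd^2); the chapter's contribution is bounding the Kullback-Leibler divergence between such an approximation and the exact posterior for log-linear models.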
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
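The truncated normal data augmentation scheme for probit regression can be sketched for the simplest case, an intercept-only model with a flat prior; the data, the naive rejection sampler and the iteration count are all illustrative choices, not the chapter's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_truncnorm(mean, lower, upper, rng):
    """Naive rejection sampler for N(mean, 1) truncated to (lower, upper)."""
    while True:
        z = rng.normal(mean, 1.0)
        if lower < z < upper:
            return z

# Intercept-only probit model: P(y = 1) = Phi(beta)
y = np.array([1] * 30 + [0] * 70)   # imbalanced outcomes (toy example)
n = len(y)
beta, draws = 0.0, []
for _ in range(200):
    # Augmentation step: latent z_i ~ N(beta, 1), truncated by the sign of y_i
    z = np.array([sample_truncnorm(beta, 0.0, np.inf, rng) if yi == 1
                  else sample_truncnorm(beta, -np.inf, 0.0, rng) for yi in y])
    # Conjugate update for beta with a flat prior: N(mean(z), 1/n)
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)
```

With a moderate success fraction this chain mixes acceptably; the slow mixing established in Chapter 7 emerges when successes are rare relative to a large n, which is exactly the rare-events regime described above.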
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we address this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme relates to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
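In recursive-logit-style route choice models, the expected downstream utility (value function) at each node satisfies a logsum recursion, V(k) = log Σ_a exp(v(k,a) + V(next(a))), with V = 0 at the destination; link choice probabilities are then a logit over downstream values. A minimal sketch on a tiny acyclic network with invented utilities (on general networks with cycles this becomes a fixed-point system rather than simple backward induction):

```python
import numpy as np

# arcs[k] lists (next_node, instantaneous_utility); node 3 is the destination
arcs = {
    0: [(1, -1.0), (2, -1.5)],
    1: [(3, -1.0), (2, -0.5)],
    2: [(3, -1.0)],
    3: [],
}

V = {3: 0.0}                     # value at the destination is zero
for k in [2, 1, 0]:              # backward induction in reverse topological order
    V[k] = np.log(sum(np.exp(u + V[j]) for j, u in arcs[k]))

def choice_probs(k):
    """Link choice probabilities at node k: logit over downstream values."""
    expvals = np.array([np.exp(u + V[j]) for j, u in arcs[k]])
    return expvals / expvals.sum()

p0 = choice_probs(0)             # probabilities of the two links leaving node 0
```

Because path probabilities factor into link probabilities, no path enumeration is needed, which is what makes this dynamic programming formulation attractive for large networks.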
Abstract:
This work examines independence in the Canadian justice system using an approach adapted from new legal realist scholarship called ‘dynamic realism’. This approach proposes that issues in law must be considered in relation to their recursive and simultaneous development with historic, social and political events. Such events describe ‘law in action’ and more holistically demonstrate principles like independence, the rule of law and access to justice. My dynamic realist analysis of independence in the justice system employs a range of methodological tools and approaches from the social sciences, including historical and historiographical study; public administration, policy and institutional analysis; an empirical component; and constitutional, statutory-interpretation and jurisprudential analysis. In my view, principles like independence represent aspirational ideals in law which can be better understood by examining how they manifest in legal culture and in the legal system. This examination focuses on the principle and practice of independence for both lawyers and judges in the justice system, but highlights the independence of the Bar. It considers the inter-relation between lawyer independence and the ongoing refinement of judicial independence in Canadian law. It also considers the independence of both the Bar and the Judiciary in the context of the administration of justice, and practically illustrates the interaction between these principles through a case study of a specific aspect of the court system. This work also focuses on recent developments in the principle of Bar independence and its relation to an emerging school of professionalism scholarship in Canada. The work concludes by describing the principle of independence as both conditional and dynamic, but rooted in a unitary concept for both lawyers and judges.
In short, independence can be defined as impartiality, neutrality and autonomy of legal decision-makers in the justice system to apply, protect and improve the law for what has become its primary normative purpose: facilitating access to justice. While both independence of the Bar and the Judiciary are required to support access to independent courts, some recent developments suggest the practical interactions between independence and access need to be the subject of further research, to better account for both the principles and the practicalities of the Canadian justice system.
Abstract:
In Marxist frameworks “distributive justice” depends on extracting value through a centralized state. Many new social movements—peer to peer economy, maker activism, community agriculture, queer ecology, etc.—take the opposite approach, keeping value in its unalienated form and allowing it to freely circulate from the bottom up. Unlike Marxism, there is no general theory for bottom-up, unalienated value circulation. This paper examines the concept of “generative justice” through an historical contrast between Marx’s writings and the indigenous cultures that he drew upon. Marx erroneously concluded that while indigenous cultures had unalienated forms of production, only centralized value extraction could allow the productivity needed for a high quality of life. To the contrary, indigenous cultures now provide a robust model for the “gift economy” that underpins open source technological production, agroecology, and restorative approaches to civil rights. Expanding Marx’s concept of unalienated labor value to include unalienated ecological (nonhuman) value, as well as the domain of freedom in speech, sexual orientation, spirituality and other forms of “expressive” value, we arrive at an historically informed perspective for generative justice.
Abstract:
Ongoing developments in laser-driven ion acceleration warrant appropriate modifications to the standard Thomson Parabola Spectrometer (TPS) arrangement in order to match the diagnostic requirements associated with the particular and distinctive properties of laser-accelerated beams. Here we present an overview of recent developments of the TPS diagnostic by our group, aimed at enhancing the capability to diagnose multi-species high-energy ion beams. To facilitate discrimination between ions with the same Z/A, a recursive differential filtering technique was implemented at the TPS detector to allow only one of the overlapping ion species to reach the detector across the entire energy range detectable by the TPS. To mitigate the issue of overlapping ion traces towards the higher-energy part of the spectrum, an extended, trapezoidal electric-plate design was conceived and then demonstrated experimentally. The design achieves high energy resolution at high energies without sacrificing the lower-energy part of the spectrum. Finally, a novel multi-pinhole TPS design is discussed that would allow angularly resolved, complete spectral characterization of high-energy, multi-species ion beams.
Abstract:
This paper documents the design and results of a study on tourists' decision-making about destinations in Sweden. For this purpose, secondary data available from surveys were used to identify which type of individual has the highest probability of revisiting a destination and which factors influence that decision. A binary logit model is applied. The results show that the most important influencing factors are the length of stay and the origin of the individual. These results could be useful for marketing organizations as well as for policy-makers in developing strategies to attract the most profitable tourism segment. They can therefore also support sustainable tourism development, where the main focus is not on increasing the number of tourists.
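A binary logit model of revisit probability can be sketched with simulated data; the covariates (length of stay, a domestic-origin dummy), the true coefficients and the Newton-Raphson fit below are invented for illustration, not the study's data or estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
stay = rng.uniform(1, 14, n)            # length of stay in days
domestic = rng.binomial(1, 0.5, n)      # 1 = domestic visitor
X = np.column_stack([np.ones(n), stay, domestic])
true_beta = np.array([-2.0, 0.2, 0.8])  # invented true coefficients
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)                  # 1 = revisits the destination

# Fit the binary logit by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    W = mu * (1.0 - mu)                 # observation weights
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta = beta + np.linalg.solve(hess, grad)
```

Positive fitted coefficients on `stay` and `domestic` correspond to the abstract's finding that longer stays and visitor origin raise the revisit probability.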
Abstract:
This graduation project involves the study, design, implementation and testing of a signature identification system using neural networks. Recurrent neural networks, also referred to in this work as recursive neural networks, have an architectural configuration that enables output signals to be fed back to the same, or previous, neurons. This feature can be used, as in this project, to build a system specialized in temporal pattern recognition, given that signatures can be seen as sequences of points in time.
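The feedback structure described above can be sketched with a minimal Elman-style recurrent cell that consumes a signature as a sequence of pen coordinates; the weights are random placeholders, not a trained identification model:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid = 2, 8                       # (x, y) pen coordinates, hidden size
W_in = rng.normal(0, 0.5, (d_hid, d_in))
W_rec = rng.normal(0, 0.5, (d_hid, d_hid))
b = np.zeros(d_hid)

def encode(sequence):
    """Run the recurrence h_t = tanh(W_in x_t + W_rec h_{t-1} + b)."""
    h = np.zeros(d_hid)
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h + b)
    return h  # the final hidden state summarizes the whole trajectory

signature = rng.normal(0, 1, (50, 2))    # a toy sequence of 50 pen points
code = encode(signature)
```

In an identification system, fixed-length codes like this one would be compared across signatures (or fed to a classifier) after the weights are trained.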
Abstract:
This study aimed to understand the relations between the organizational structuring of the shrimp-farming field (the shrimp agribusiness located in the state of Rio Grande do Norte) and the strategies adopted by its players. To that end, semi-structured interviews were conducted with members of various organizations acting in the field, such as cooperatives, associations, enterprises at different links in the chain, universities and state agencies, and a large collection of secondary data was assembled. As expected, it was found that the field and the strategies are related in a recursive way: the configuration of the field, a result of its own biography, has decisively influenced the strategies adopted by its actors, who, as they evolved, eventually caused further changes in the field and shaped the plot of this arena of interaction. It was found, for example, that thirty-five years after its genesis, the shrimp field in RN still has a low level of institutionalization, which helps to explain the difficulty its actors have in establishing strategies based on partnerships and cooperation, actions that are necessary to alleviate the effects of the crisis that has devastated the industry since 2004. It was noticed, however, that this level of institutionalization results, among other factors, from the very strategies that the field's actors have embraced along its trajectory. Thus, this study hopes to have contributed both to the necessary revival of agency in the analysis of institutional phenomena, as noted by Oliver (1991), and to the need for more contextualized approaches to organizational strategy (MINTZBERG, 1987; CLEGG, 2004; WHITTINGTON, 2004; 2006; SARAIVA and CARRIERI, 2007). This is an exploratory study that requires further investigation.
In this sense, other methodologies and theoretical perspectives need to be employed, especially those capable of apprehending the disputes and discursive aspects of power that are salient in the field investigated. Moreover, in terms of practical action, it is suggested that the main actors of the field (cooperatives, companies, state entities and class associations) join efforts as soon as possible to support the shrimp field in RN and undertake sustainable actions that can promote the development of the activity from a global perspective. At the apex of the shrimp activity everybody wanted to be the "father of the child"; now, someone has to "stay in the goal."