947 results for Random coefficient logit (RCL) model
Abstract:
The activation-deactivation pseudo-equilibrium coefficient Q_t and constant K_0 (= Q_t × P_a,T1,t = ([A1] × [Ox]) / ([T1] × [T])), as well as the factor of activation (P_a,T1,t) and the rate constants of the elementary step reactions that govern the increase of M_n with conversion in the controlled cationic ring-opening polymerization of oxetane (Ox) in 1,4-dioxane (1,4-D) and in tetrahydropyran (THP) (i.e., cyclic ethers (T) that have no homopolymerizability), were determined using terminal-model kinetics. We show analytically that the dynamic behavior of the two growing species (A1 and T1) competing for the same resources (Ox and T) follows a Lotka-Volterra model of predator-prey interactions. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
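The Lotka-Volterra analogy invoked above can be made concrete with a minimal numerical sketch. The classical predator-prey equations are integrated below with hypothetical rate constants and generic species names; the actual constants determined in the study are not reproduced here.

```python
# Minimal sketch: classical Lotka-Volterra predator-prey dynamics, offered only as an
# illustration of the kind of model the abstract refers to; the rate constants below
# are hypothetical placeholders, not the values determined in the study.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, a, b, c, d):
    """y[0] = prey-like species, y[1] = predator-like species."""
    prey, pred = y
    dprey = a * prey - b * prey * pred
    dpred = c * prey * pred - d * pred
    return [dprey, dpred]

# Hypothetical parameters and initial concentrations (arbitrary units).
params = (1.0, 0.1, 0.075, 1.5)
sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0], args=params,
                dense_output=True)

t = np.linspace(0.0, 50.0, 200)
prey, pred = sol.sol(t)
print(f"prey range: {prey.min():.2f}-{prey.max():.2f}, "
      f"predator range: {pred.min():.2f}-{pred.max():.2f}")
```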
Abstract:
Our article deals with the paradoxical phenomenon that, in equilibrium solutions of the Neumann model with explicitly represented consumption, the prices of the subsistence goods that determine the wage can in some cases be zero, so that the equilibrium value of the real wage is also zero. This phenomenon always occurs in decomposable economies that admit alternative equilibrium solutions with different growth and profit rates. The phenomenon can be discussed in a much more transparent form in a simpler variant of the model built on Leontief technology, which we exploit. We show that solutions whose growth factor is below the maximal one are economically meaningless and therefore of no interest. In doing so we show, on the one hand, that Neumann's excellent intuition served him well when he insisted on the uniqueness of his model's solution and, on the other, that this does not require any assumption about the decomposability of the economy. The topic is closely related to Ricardo's analysis of the determination of the general rate of profit, cast in modern form by Sraffa, and to the well-known wage-profit and accumulation-consumption trade-off frontiers of neoclassical growth theory, which indicates the theoretical and history-of-thought interest of the subject. / === / In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact, such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of growth and interest rate. It can be shown that a zero equilibrium wage rate will appear in all equilibrium solutions where the growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. This situation also occurs in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Analysis of them also yields an interesting corollary to Ricardo's corn rate of profit: the real cause of the awkwardness is bad specification of the model; luxury commodities are introduced without there being a final demand for them, and their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rates of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium that is unique in terms of the growth and interest rate and coupled with a positive wage rate, so confirming Neumann's intuition.
The proposed solution relates closely to findings of Bromek in a paper correcting Morishima's generalization of wage/profit and consumption/investment frontiers.
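For reference, a sketch of the classical von Neumann equilibrium conditions is given below; the Marx-Neumann (Morishima) variant discussed above augments the input matrix with necessary consumption, and the notation (A, B, x, p, alpha, beta) is assumed rather than taken from the paper.

```latex
% Classical von Neumann equilibrium (sketch): A = input matrix, B = output matrix,
% x = intensity vector, p = price vector, alpha = growth factor, beta = interest factor.
\begin{gather*}
  Bx \;\ge\; \alpha A x, \qquad pB \;\le\; \beta p A, \qquad pBx \;>\; 0,\\
  pBx \;=\; \alpha\, pAx \quad\text{and}\quad pBx \;=\; \beta\, pAx,
  \qquad\text{so that}\quad \alpha = \beta \ \text{at equilibrium.}
\end{gather*}
```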
Abstract:
This study was an evaluation of a Field Project Model Curriculum and its impact on achievement, attitude toward science, attitude toward the environment, self-concept, and academic self-concept with at-risk eleventh- and twelfth-grade students. One hundred eight students were pretested and posttested on the Piers-Harris Children's Self-Concept Scale, PHCSC (1985); the Self-Concept as a Learner Scale, SCAL (1978); the Marine Science Test, MST (1987); the Science Attitude Inventory, SAI (1970); and the Environmental Attitude Scale, EAS (1972). Using a stratified random design, students were randomly assigned, according to sex and stanine level, to three treatment groups. Group one received the field project method, group two received the field study method, and group three received the field trip method. All three groups followed the marine biology course content as specified by Florida Student Performance Objectives and Frameworks. The intervention occurred over ten months, with each group participating in outside-of-classroom activities on a trimonthly basis. Analysis of covariance procedures were used to determine treatment effects. F-ratios, p-levels, and t-tests at p < .0062 (.05/8) indicated that a significant difference existed among the three treatment groups. Findings indicated that groups one and two were significantly different from group three, with group one displaying significantly higher results than group two. There were no significant differences between males and females in performance on the five dependent variables. The tenets underlying environmental education are congruent with the recommendations for the reform of science education. These include a value analysis approach, inquiry methods, and critical thinking strategies that are applied to environmental issues.
Abstract:
An emergency is a deviation from a planned course of events that endangers people, property, or the environment. It can be described as an unexpected event that causes economic damage, destruction, and human suffering. When a disaster happens, emergency managers are expected to have a response plan for the most likely disaster scenarios. Unlike earthquakes and terrorist attacks, a hurricane response plan can be activated ahead of time, since a hurricane is predicted at least five days before it makes landfall. This research looked into the logistics aspects of the problem, in an attempt to develop a hurricane relief distribution network model. We addressed the problem of how to efficiently and effectively deliver basic relief goods to victims of a hurricane disaster: specifically, where to preposition State Staging Areas (SSAs), which Points of Distribution (PODs) to activate, and how to allocate commodities to each POD. Previous research has addressed several of these issues, but not with the incorporation of the random behavior of the hurricane's intensity and path. This research presents a stochastic meta-model that deals with the location of SSAs and the allocation of commodities. The novelty of the model is that it treats the strength and path of the hurricane as stochastic processes and models them as discrete Markov chains. The demand is also treated as a stochastic parameter because it depends on the stochastic behavior of the hurricane. However, for the meta-model, the demand is an input that is determined using Hazards United States (HAZUS), software developed by the Federal Emergency Management Agency (FEMA) that estimates losses due to hurricanes and floods. A solution heuristic was developed based on simulated annealing. Since the meta-model is a multi-objective problem, the heuristic is a multi-objective simulated annealing (MOSA) algorithm, in which the initial solution and the cooling rate were determined via a design of experiments. The experiment showed that the initial temperature (T0) is irrelevant, but that the temperature reduction (δ) must be very gradual. Assessment of the meta-model indicates that the Markov chains performed as well as or better than forecasts made by the National Hurricane Center (NHC). Tests of the MOSA showed that it provides solutions in an efficient manner. Finally, an illustrative example shows that the meta-model is practical.
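The treatment of hurricane strength as a discrete Markov chain can be illustrated with a short sketch; the intensity categories and transition probabilities below are hypothetical placeholders, not the chains estimated in this research.

```python
# Minimal sketch of treating hurricane intensity as a discrete Markov chain, in the
# spirit described in the abstract. States and transition probabilities are hypothetical.
import numpy as np

states = ["TS", "Cat1", "Cat2", "Cat3", "Cat4"]  # assumed intensity categories
P = np.array([                                    # hypothetical 6-hourly transitions
    [0.70, 0.30, 0.00, 0.00, 0.00],
    [0.20, 0.55, 0.25, 0.00, 0.00],
    [0.00, 0.25, 0.50, 0.25, 0.00],
    [0.00, 0.00, 0.30, 0.50, 0.20],
    [0.00, 0.00, 0.00, 0.40, 0.60],
])

def simulate_intensity(start_state, n_steps, rng):
    """Sample one intensity path of length n_steps from the Markov chain."""
    path = [start_state]
    for _ in range(n_steps):
        path.append(rng.choice(len(states), p=P[path[-1]]))
    return [states[s] for s in path]

rng = np.random.default_rng(0)
print(simulate_intensity(start_state=1, n_steps=20, rng=rng))  # start at Cat1
```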
Abstract:
An integrated surface-subsurface hydrological model of Everglades National Park (ENP) was developed using the MIKE SHE and MIKE 11 modeling software. The model has a resolution of 400 meters, covers approximately 1050 square miles of ENP, includes 110 miles of drainage canals with a variety of hydraulic structures, and processes hydrological information such as evapotranspiration, precipitation, groundwater levels, canal discharges and levels, and operational schedules. Calibration was based on time series and probability of exceedance for water levels and discharges in the years 1987 through 1997. Model verification was then completed for the period 1998 through 2005. Parameter sensitivity and uncertainty analysis showed that the model was most sensitive to the hydraulic conductivity of the regional Surficial Aquifer System, Manning's roughness coefficient, and the leakage coefficient, which defines the canal-subsurface interaction. The model offers an enhanced predictive capability, compared to other models currently available, to simulate the flow regime in ENP and to forecast the impact of changes in topography, water flows, and operational schedules.
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. Damage to building structures is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two types of damage in most cases cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The limited research and the resulting knowledge gaps are in large part due to the complexity of damage phenomena during hurricanes and the lack of established measurement methodologies to quantify rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and measurement of rainwater intrusion to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate the WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated on the basis of a tropical cyclone WDR study. The simulated WDR was later used to experimentally investigate the mechanisms of rainwater deposition/intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of direct impinging raindrops and surface runoff rainwater over the building surface — the rain admittance factor (RAF) and the surface runoff coefficient (SRC), respectively — was developed using common shapes of low-rise buildings. The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model using experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and the experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
Abstract:
This research aimed to analyse the effect of different territorial divisions on the random fluctuation of socio-economic indicators related to social determinants of health. This is an ecological study combining statistical methods for individual and aggregate data analysis, using five databases derived from the Brazilian 2010 demographic census database: overall results of the sample by weighting area. These data were grouped into the following levels: households; weighting areas; cities; Immediate Urban Associated Regions; and Intermediate Urban Associated Regions. A theoretical model related to social determinants of health was used, with 'household with death' as the dependent variable and the following independent variables: black race; income; non-attendance at daycare or school; illiteracy; and low schooling. The data were analysed within a model related to social determinants of health, using Poisson regression on an individual basis, multilevel Poisson regression, and multiple linear regression, in light of the theoretical framework of the area. A greater proportion of households with deaths was identified among those with at least one resident who was black, lower-income, illiterate, not attending (or never having attended) school or daycare, or less educated. The analysis of the adjusted model showed that the highest adjusted prevalence ratio was related to income, with a value of 1.33 for households with at least one resident whose average personal income was below R$ 655.00 (Brazilian currency). The multilevel analysis demonstrated a context effect when the variables were subjected to the effects of areas, insofar as the random effects were significant for all models, with prevalence ratios higher in the areas with smaller dimensions: weighting areas, with a coefficient of 0.035, and cities, with a coefficient of 0.024. The ecological analyses showed that the variables income and low schooling had explanatory potential for the outcome in all models, with income having greater power to determine household deaths, especially in the models related to Immediate Urban Associated Regions, with a standardized coefficient of -0.616, and Intermediate Urban Associated Regions, with a standardized coefficient of -0.618. It was concluded that there was a context effect on the random fluctuation of the socioeconomic indicators related to social determinants of health. This effect was explained by the characteristics of the territorial divisions and of the individuals who live or work there. Context effects were better identified in the areas with smaller dimensions, which are more favourable for explaining phenomena related to social determinants of health, especially in studies of societies marked by social inequalities. Composition effects were better identified in the regions of urban articulation, shaped through mechanisms similar to the phenomenon under study.
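As a minimal sketch of the individual-level Poisson regression step described above (the multilevel models with area-level random effects are not reproduced), the following uses illustrative variable names and toy data rather than the census microdata.

```python
# Minimal sketch of an individual-level Poisson regression for a binary outcome, the
# common epidemiological device for estimating prevalence ratios. Column names and
# data are illustrative placeholders, not the study's census variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "household_death": [0, 1, 1, 0, 1, 0, 0, 1],
    "black_resident":  [1, 0, 1, 0, 1, 1, 0, 0],
    "low_income":      [1, 1, 1, 0, 1, 0, 1, 0],
    "illiterate":      [0, 0, 1, 0, 0, 1, 0, 1],
})

# Poisson regression with robust (HC0) standard errors yields prevalence ratios.
model = smf.glm("household_death ~ black_resident + low_income + illiterate",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC0")
print(result.summary())
print("Prevalence ratios:\n", np.exp(result.params))
```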
Abstract:
The transducer function mu for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean mu(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by mu, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function mu and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which have bearings on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
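A compact way to see these relations is to write them under the common simplifying assumption of additive, unit-variance Gaussian internal noise; the paper's treatment is more general and also considers other response distributions. The sketch below uses that assumption.

```latex
% Sketch under additive, unit-variance Gaussian internal noise:
%   internal response to contrast c:  R(c) = \mu(c) + \varepsilon,  \varepsilon \sim N(0,1).
% 2AFC psychometric function for discriminating a pedestal c from c + \Delta c:
\[
  \Psi(\Delta c; c) \;=\; \Pr\{R(c+\Delta c) > R(c)\}
                    \;=\; \Phi\!\left(\frac{\mu(c+\Delta c)-\mu(c)}{\sqrt{2}}\right),
\]
% with detection as the special case c = 0. The TvC function \Delta c_T(c) solves
% \Psi(\Delta c; c) = \pi_T for a fixed criterion \pi_T, giving
\[
  \mu\bigl(c+\Delta c_T(c)\bigr) - \mu(c) \;=\; \sqrt{2}\,\Phi^{-1}(\pi_T),
\]
% so the form of the TvC function is determined by \mu alone, as stated in the abstract.
```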
Abstract:
We study theoretically the effect of a new type of blocklike positional disorder on the effective electromagnetic properties of one-dimensional chains of resonant, high-permittivity dielectric particles, where particles are arranged into perfectly well-ordered blocks whose relative position is a random variable. This creates a finite order correlation length that mimics the situation encountered in metamaterials fabricated through self-assembled techniques, whose structures often display short-range order between near neighbors but long-range disorder, due to stacking defects. Using a spectral theory approach combined with a principal component statistical analysis, we study, in the long-wavelength regime, the evolution of the electromagnetic response when the composite filling fraction and the block size are changed. Modifications in key features of the resonant response (amplitude, width, etc.) are investigated, showing a regime transition for a filling fraction around 50%.
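The notion of blocklike positional disorder can be illustrated with a short sketch that builds a 1D chain of particle positions from perfectly ordered blocks whose positions are randomly displaced; all sizes and amplitudes are arbitrary illustration values.

```python
# Minimal sketch of "blocklike" positional disorder in a 1D particle chain: particles
# are perfectly ordered inside each block, while the blocks themselves are randomly
# displaced. All sizes and amplitudes below are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(1)

n_blocks = 20          # number of ordered blocks
block_size = 8         # particles per block (sets the order-correlation length)
spacing = 1.0          # intra-block lattice constant
block_jitter = 0.4     # amplitude of the random displacement applied to each block

positions = []
for b in range(n_blocks):
    block_origin = b * block_size * spacing + rng.uniform(-block_jitter, block_jitter)
    positions.append(block_origin + spacing * np.arange(block_size))
positions = np.concatenate(positions)

# Nearest-neighbour distances: ordered inside blocks, disordered across block edges.
gaps = np.diff(np.sort(positions))
print(f"mean gap {gaps.mean():.3f}, gap std {gaps.std():.3f}")
```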
Abstract:
A main unsolved problem in the RNA world scenario for the origin of life is how a template-dependent RNA polymerase ribozyme emerged from short RNA oligomers generated by random polymerization of ribonucleotides (Joyce and Orgel 2006). Current estimates establish a minimum size of about 165 nt for such a ribozyme (Johnston et al. 2001), a length three to four times that of the longest RNA oligomers obtained by random polymerization on clay mineral surfaces (Huang and Ferris 2003, 2006). To overcome this gap, we have developed a stepwise model of ligation-based, modular evolution of RNA (Briones et al. 2009), whose main conceptual steps are summarized in Figure 1. This scenario has two main advantages with respect to previous hypotheses put forward for the origin of the RNA world: i) short RNA....
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
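For orientation, the latent-class (PARAFAC) representation of a multivariate categorical probability mass function mentioned above can be sketched as follows; the collapsed Tucker decomposition proposed in Chapter 2 is not reproduced, and all dimensions and parameters are arbitrary.

```python
# Minimal sketch of the latent-class (PARAFAC) representation of a multivariate
# categorical pmf, P(y_1,...,y_p) = sum_h lambda_h * prod_j psi_j[h, y_j]. Dimensions
# and parameters are arbitrary illustration values.
from itertools import product
import numpy as np

rng = np.random.default_rng(0)
k, p, d = 3, 4, 2          # latent classes, variables, categories per variable

lam = rng.dirichlet(np.ones(k))                              # class weights
psi = [rng.dirichlet(np.ones(d), size=k) for _ in range(p)]  # psi[j][h, c]

def joint_pmf(y):
    """Probability of the categorical vector y under the latent-class model."""
    probs = lam.copy()
    for j, c in enumerate(y):
        probs = probs * psi[j][:, c]
    return probs.sum()

# Sanity check: the pmf sums to one over all d**p cells of the contingency table.
total = sum(joint_pmf(y) for y in product(range(d), repeat=p))
print(f"total probability = {total:.6f}")
print("P(y = (0,1,0,1)) =", joint_pmf((0, 1, 0, 1)))
```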
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
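As a simpler stand-in for the optimal Gaussian approximation derived in Chapter 4 (which is specific to Diaconis--Ylvisaker priors), the sketch below shows a generic Laplace approximation to the posterior of a Poisson log-linear model with a Gaussian prior; it is meant only to illustrate the idea of approximating a posterior by a Gaussian.

```python
# Generic Laplace (Gaussian) approximation to a posterior, shown only as a simpler
# stand-in for the chapter's optimal Gaussian approximation under Diaconis--Ylvisaker
# priors. Here: Poisson log-linear likelihood with an independent N(0, 10^2) prior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])   # design: intercept + covariate
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

def neg_log_post(beta):
    eta = X @ beta
    log_lik = np.sum(y * eta - np.exp(eta))               # Poisson log-likelihood (up to const.)
    log_prior = -0.5 * np.sum(beta ** 2) / 100.0          # N(0, 10^2) prior
    return -(log_lik + log_prior)

opt = minimize(neg_log_post, x0=np.zeros(2), method="BFGS")
mode = opt.x

# Hessian of the negative log posterior at the mode (analytic for this model).
W = np.exp(X @ mode)
hess = X.T @ (W[:, None] * X) + np.eye(2) / 100.0
cov = np.linalg.inv(hess)

print("Gaussian approximation: mean =", mode.round(3))
print("covariance =\n", cov.round(4))
```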
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
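The truncated-Normal data augmentation sampler for probit regression referred to above (the Albert-Chib construction) can be sketched as follows; a flat prior on the coefficients is assumed, and the simulated data are made success-poor to mimic the rare-event setting analyzed in Chapter 7.

```python
# Minimal sketch of the truncated-Normal data augmentation Gibbs sampler for probit
# regression (Albert--Chib), with a flat prior on the coefficients. The data are
# simulated with few successes to mimic the rare-event setting discussed above.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

n, p = 2000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)   # few successes

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)

beta = np.zeros(p)
draws = []
for it in range(2000):
    # 1. Sample latent z_i ~ N(x_i'beta, 1), truncated to (0, inf) if y_i = 1, else (-inf, 0).
    mu = X @ beta
    lower = np.where(y == 1, -mu, -np.inf)
    upper = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2. Sample beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior.
    beta_hat = XtX_inv @ (X.T @ z)
    beta = beta_hat + chol @ rng.normal(size=p)
    draws.append(beta.copy())

draws = np.array(draws[500:])                      # discard burn-in
print("posterior means:", draws.mean(axis=0).round(3))
```

With the rare-event data above, the chain's autocorrelations for the intercept are typically high, which is the slow-mixing behavior the chapter quantifies.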
Abstract:
The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test for and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing demonstrates the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogenous site types and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
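A minimal sketch of the classification stage described above (PCA to reduce redundant band and terrain features, followed by an LDA classifier whose posterior probabilities can be mapped) is given below; the features and labels are random placeholders, not the Worldview-2 data used in the paper.

```python
# Minimal sketch: PCA for dimensionality reduction followed by a linear discriminant
# analysis classifier whose class posterior probabilities can be mapped over a survey
# area. Features and labels are random placeholders, not the Worldview-2 data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Placeholder feature matrix: e.g. spectral bands + band ratios + terrain derivatives.
X = rng.normal(size=(200, 14))
y = rng.integers(0, 2, size=200)            # 1 = workshop site, 0 = non-site (placeholder)

clf = make_pipeline(StandardScaler(), PCA(n_components=6), LinearDiscriminantAnalysis())
clf.fit(X, y)

# Posterior probability of the "site" class for new locations.
X_new = rng.normal(size=(5, 14))
print(clf.predict_proba(X_new)[:, 1].round(3))
```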
Abstract:
This paper documents the design and results of a study on tourists' decision-making about destinations in Sweden. For this purpose, secondary data available from surveys were used to identify which type of individual has the highest probability of revisiting a destination and which factors influence that decision. A binary logit model is applied. The results show that the length of stay and the origin of the individual are very important influencing factors. These results could be useful for marketing organizations as well as for policy, in developing strategies to attract the most profitable tourism segment. They can therefore also support sustainable tourism development, in which the main focus is not on increasing the number of tourists.
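A minimal sketch of a binary logit model of revisit probability with the two influential covariates named above (length of stay and origin) is given below; the data and variable names are illustrative placeholders, not the survey data used in the study.

```python
# Minimal sketch of a binary logit model for the probability of revisiting a
# destination, with illustrative covariates for length of stay and visitor origin.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "length_of_stay": rng.integers(1, 15, size=n),          # nights (placeholder)
    "domestic":       rng.integers(0, 2, size=n),           # 1 = domestic visitor (placeholder)
})
logit_true = -1.5 + 0.15 * df["length_of_stay"] + 0.8 * df["domestic"]
df["revisit"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

model = smf.logit("revisit ~ length_of_stay + domestic", data=df).fit()
print(model.summary())
new = pd.DataFrame({"length_of_stay": [7], "domestic": [1]})
print("Predicted revisit probability, 7-night domestic visitor:",
      round(model.predict(new).iloc[0], 3))
```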
Abstract:
This research is part of the field of organizational studies, focusing on organizational purchase behavior and, specifically, on interorganizational trust in purchasing. This topic is current and relevant because the development of good buyer-supplier relations increases the exchange of information, lengthens relationships, reduces hierarchical controls, and improves performance. Furthermore, although there is a vast literature on trust, scientific work dealing specifically with interorganizational trust still needs further research to synthesize and validate the variables that generate this phenomenon. This investigation therefore seeks to explain the antecedents of interorganizational trust through the relationships among operational performance, organizational characteristics, shared values, and interpersonal relationships in purchases by manufacturing industries, in order to develop a more robust and consensual literature that encompasses current sociological and economic perspectives and considers the effect of interpersonal relationships on this phenomenon. This proposal offers a new view of the antecedents of interorganizational trust, drawing on the quantitative models of Morgan and Hunt (1994), Doney and Cannon (1997), Zhao and Cavusgil (2006) and Nyaga, Whipple and Lynch (2011), as well as on the qualitative analysis of Tacconi et al. (2011). With regard to methodology, the study is descriptive, survey-based, and causal, with both theoretical and empirical features. In nature, the investigation is explanatory and quantitative, using exploratory factor analysis and structural equation modeling (SEM) in IBM SPSS Amos 18.0, estimated by maximum likelihood and supported by bootstrapping. The unit of analysis was the buyer-supplier relationship, in which the object under investigation was the supplier organization from the point of view of the purchasing company. A total of 237 valid questionnaires were collected from key informants, using simple random sampling of manufacturing industries (SIC 10-33) located in the city and region of Natal. The initial descriptive results demonstrate the phenomenon of interorganizational trust, in which purchasing firms believe in and feel secure about their suppliers. Trust showed high intensity, predominantly toward vendors that supply materials used directly in the production process. Exploratory and confirmatory factor analyses, performed on each variable separately, generated a more consistent set of observable and unobservable variables, giving rise to a model that needed to be respecified. The respecified model consists of positive paths, with good fit, satisfactory composite reliability and variance extracted, and convergent and discriminant validity, with significant factor loadings and strong explanatory power. Given the findings supporting the respecified model, which suggest a high probability that it is better suited to the study population, the results support the explanation that interorganizational trust in purchasing depends directly on interpersonal relationships, shared values and operational performance, and indirectly on personal relationships, social networks, organizational characteristics, and the physical and relational aspects of performance.
It is concluded that this trust can be explained by a set of interactions among these three determinants, with the emphasis on interpersonal relationships, which had the largest path coefficient among the factors studied.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06