866 results for Ordinary Least Squares Method


Relevance: 100.00%

Abstract:

This dissertation examines monetary models of exchange rate determination for Brazil, Canada, and two Caribbean countries, the Dominican Republic and Jamaica. With the exception of Canada, all adopted a floating regime during the past ten years. The empirical validity of four seminal models in exchange rate economics was assessed. Three of these models were entirely classical (Bilson and Frenkel) or Keynesian (Dornbusch) in nature; the fourth, the Real Interest Differential model, mixes the two schools of economic theory. There is no clear empirical evidence for the validity of the monetary models. However, the signs of the coefficients on the nominal interest differential variable were as predicted by the Keynesian hypothesis in the case of Canada and as predicted by the Chicago theorists in the remaining countries. Moreover, in the case of Brazil, owing to hyperinflation, the exchange rate is heavily influenced by the domestic money supply. I also tested purchasing power parity (PPP) for the same set of countries. For both the monetary and the PPP hypotheses, I tested for cointegration and applied the ordinary least squares estimation procedure. An error correction model was also used for the PPP model to determine convergence to equilibrium. The validity of PPP is also questionable for my set of countries. Endogeneity among the regressors and the lack of proper price indices are contributing factors. More importantly, central bank intervention negates rapid adjustment of prices and exchange rates to their equilibrium values. However, the PPP model's forecasting capability for the period 1993-1994 is superior to that of the monetary models in two of the four cases. I conclude that, in spite of the questionable validity of these models, the monetary models give better results for "smaller" economies such as the Dominican Republic and Jamaica, where monetary influences swamp the other determinants of the exchange rate.
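As a concrete illustration of the estimation strategy this abstract describes (a cointegration test, OLS estimation of the long-run relation, and an error correction model), here is a minimal Python sketch on simulated data; the series names are illustrative assumptions, not the dissertation's data.

```python
# Minimal sketch of the PPP/monetary-model estimation strategy described
# above: test for cointegration, then estimate the long-run relation by OLS.
# Variable names and simulated data are illustrative assumptions only.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 200
money_diff = np.cumsum(rng.normal(size=n))                  # domestic-foreign money supply (logs)
log_fx = 0.8 * money_diff + rng.normal(scale=0.5, size=n)   # log exchange rate

# Engle-Granger cointegration test (null hypothesis: no cointegration)
t_stat, p_value, _ = coint(log_fx, money_diff)
print(f"cointegration test p-value: {p_value:.3f}")

# If cointegrated, the long-run relation can be estimated by OLS
ols = sm.OLS(log_fx, sm.add_constant(money_diff)).fit()
print(ols.params)

# Error correction model: short-run dynamics pulled toward equilibrium
resid = ols.resid
d_fx, d_m = np.diff(log_fx), np.diff(money_diff)
ecm = sm.OLS(d_fx, sm.add_constant(np.column_stack([d_m, resid[:-1]]))).fit()
print(ecm.params)  # coefficient on the lagged residual = speed of adjustment
```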

Relevance: 100.00%

Abstract:

The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect delays encountered in the associated iteration. The iterative link time adjustment is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes while the input capacities are given in hourly volumes, the hourly capacities must be converted to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS, which is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits behind traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
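The BPR computation and the CONFAC conversion described above can be made concrete with a short sketch; the coefficients 0.15 and 4.0 are the common BPR defaults, and all input values are hypothetical.

```python
# Sketch of the BPR volume-delay computation with CONFAC conversion,
# as described above. All input values are hypothetical.

def bpr_travel_time(free_flow_time: float, daily_volume: float,
                    hourly_capacity: float, confac: float,
                    alpha: float = 0.15, beta: float = 4.0) -> float:
    """BPR equation: t = t0 * (1 + alpha * (v/c)^beta).

    FSUTMS assigns daily volumes, so the hourly capacity is converted
    to a daily equivalent by dividing by CONFAC (the peak-hour volume
    expressed as a fraction of the daily volume).
    """
    daily_capacity = hourly_capacity / confac
    vc_ratio = daily_volume / daily_capacity
    return free_flow_time * (1.0 + alpha * vc_ratio ** beta)

# CONFAC = peak-hour volume / daily volume, e.g. 9.5% of daily traffic
# falling in the peak hour:
print(bpr_travel_time(free_flow_time=10.0, daily_volume=18000,
                      hourly_capacity=1800, confac=0.095))
```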

Relevance: 100.00%

Abstract:

Today, the trend toward decentralization is far-reaching. Proponents argue that decentralization promotes responsive and accountable local government by shortening the distance between local representatives and their constituencies. In this paper, however, I focus on a countervailing effect of decentralization on the accountability mechanism: decentralization, by increasing the number of actors eligible for policy making and implementation in governance as a whole, may blur lines of responsibility and thus weaken citizens' ability to sanction the government in elections. Using an ordinary least squares (OLS) interaction model based on historical panel data for 78 countries over the 2002-2010 period, I test the hypothesis that as the number of government tiers increases, there is a negative interaction between the number of government tiers and decentralization policies. The regression results provide empirical evidence that decentralization policies, which have a positive impact on governance under a relatively simple form of multilevel governance, lose statistical significance once the complexity of government structure exceeds a certain degree. In particular, this paper finds that the presence of intergovernmental meetings with legally binding authority has a negative impact on governance when the complexity of government structure reaches its highest level.
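A minimal sketch of the kind of OLS interaction specification described above, on simulated data; the variable names and effect sizes are illustrative assumptions, not the paper's dataset.

```python
# Sketch of the OLS interaction specification described above: governance
# quality regressed on decentralization, government tiers, and their
# interaction. Data and variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 78 * 9  # 78 countries observed over 2002-2010
df = pd.DataFrame({
    "decentralization": rng.uniform(0, 1, n),
    "tiers": rng.integers(2, 6, n),
})
df["governance"] = (0.5 * df["decentralization"]
                    - 0.15 * df["decentralization"] * df["tiers"]
                    + rng.normal(scale=0.5, size=n))

# A negative interaction implies decentralization's payoff fades as the
# number of government tiers grows.
model = smf.ols("governance ~ decentralization * tiers", data=df).fit()
print(model.summary().tables[1])
```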

Relevance: 100.00%

Abstract:

Protecting public health is the most legitimate use of zoning, and yet there has been minimal progress in applying it to the obesity problem. Zoning could potentially be used to address both unhealthy and healthy food retailers, but the lack of evidence on the impact of zoning, and on public opinion toward zoning changes, is a barrier to implementing zoning restrictions on fast food at a larger scale. My dissertation addresses these gaps in our understanding of health zoning as a policy option for altering built food environments.

Chapter 1 examines the relationship between food swamps and obesity and whether spatial mapping might be useful in identifying priority geographic areas for zoning interventions. I employ an instrumental variables (IV) strategy to correct for the endogeneity problems associated with food environments, namely that individuals may self-select into certain neighborhoods and may consider food availability in their decision process, utilizing highway exits as a source of exogenous variation. Using secondary data from the USDA Food Environment Atlas, ordinary least squares (OLS) and IV regression models were employed to analyze cross-sectional associations between local food environments and the prevalence of obesity. I find that even after controlling for food desert effects, food swamps have a positive, statistically significant effect on adult obesity rates.
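The IV logic described above can be sketched as a manual two-stage least squares on simulated data. Variable names are assumptions; note that second-stage standard errors from this manual procedure are not IV-corrected, which dedicated 2SLS routines handle.

```python
# Sketch of the instrumental-variables strategy described above: food-swamp
# exposure instrumented with highway-exit density. Data are simulated and
# variable names are assumptions; a manual two-stage procedure is shown for
# transparency.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 3000
highway_exits = rng.poisson(3, n).astype(float)        # instrument
confounder = rng.normal(size=n)                        # unobserved self-selection
food_swamp = 0.6 * highway_exits + confounder + rng.normal(size=n)  # endogenous
obesity = 0.4 * food_swamp + confounder + rng.normal(size=n)

# Stage 1: regress the endogenous food-swamp score on the instrument
stage1 = sm.OLS(food_swamp, sm.add_constant(highway_exits)).fit()
swamp_hat = stage1.fittedvalues

# Stage 2: regress obesity on the predicted food-swamp score
stage2 = sm.OLS(obesity, sm.add_constant(swamp_hat)).fit()
print("naive OLS:", sm.OLS(obesity, sm.add_constant(food_swamp)).fit().params[1])
print("2SLS:     ", stage2.params[1])  # closer to the true effect of 0.4
```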

Chapter 2 applies theories of message framing and prospect theory to the emerging discussion around health zoning policies targeting food environments, and explores public opinion toward a list of potential zoning restrictions on fast-food restaurants (beyond moratoriums on new establishments). To explore causality, I employ an online survey experiment manipulating exposure to vignettes with different message frames about health zoning restrictions, with two national samples of adult Americans aged 18 and over (N1=2,768 and N2=3,236). The second sample oversamples Black Americans (N=1,000) and individuals with high school as their highest level of education. Respondents were randomly assigned to one of six conditions in which they were primed with different message frames about the benefits of zoning restrictions on fast food retailers. Participants were then asked to indicate their support for six zoning policies on a Likert scale. Subjects also answered questions about their food store access, eating behaviors, health status, and perceptions of food stores by type.

I find that a message frame combining Nutrition with increasing Equity in the food system was particularly effective at increasing support for health zoning policies targeting fast food outlets, across policy categories (Conditional, Youth-related, Performance, and Incentive) and across racial groups. This finding is consistent with an influential environmental justice scholar's description of "injustice frames" as effective in mobilizing supporters around environmental issues (Taylor 2000). I extend this rationale to food-environment obesity prevention efforts and identify Nutrition combined with Equity frames as an arguably universal campaign strategy for bolstering public support of zoning restrictions on fast food retailers.
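A hedged sketch of how frame effects in an experiment of this design are typically estimated: support regressed on randomly assigned condition dummies. The condition labels and effect sizes below are illustrative assumptions, not the survey's actual conditions.

```python
# Sketch of a frame-effect analysis consistent with the experiment described
# above: support for a zoning policy (Likert 1-5) regressed on randomly
# assigned message-frame dummies. All data and labels are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
frames = ["control", "health", "nutrition_equity", "economic",
          "youth", "community"]
n = 2768
df = pd.DataFrame({"frame": rng.choice(frames, n)})
effect = {f: 0.0 for f in frames}
effect["nutrition_equity"] = 0.5   # hypothesized strongest frame
df["support"] = (3.0 + df["frame"].map(effect)
                 + rng.normal(scale=1.0, size=n)).clip(1, 5)

# Treatment-effect estimates relative to the control frame
model = smf.ols("support ~ C(frame, Treatment('control'))", data=df).fit()
print(model.summary().tables[1])
```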

Bridging my findings from Chapters 1 and 2: using food swamps as a spatial metaphor may work to identify priority areas for policy intervention, but only if there is an equitable distribution of resources and mobilization efforts to improve consumer food environments. If the structural forces that ration access to land-use planning persist (arguably including the media as gatekeepers of information and producers of message frames), disparities in obesity are likely to widen.

Relevance: 100.00%

Abstract:

The primary objective is to investigate the main factors contributing to GMS expenditure on pharmaceutical prescribing and to project this expenditure to 2026. This study is located in the pharmacoeconomic literature on cost containment and projections. The thesis has five main aims:

1. To determine the main factors contributing to GMS expenditure on pharmaceutical prescribing.

2. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2006 Central Statistics Office (CSO) Census data and 2007 Health Service Executive Primary Care Reimbursement Service (HSE-PCRS) sample data.

3. To develop a model to project GMS prescribing expenditure in five-year intervals to 2026, using 2012 HSE-PCRS population data, incorporating cost containment measures, and 2011 CSO Census data.

4. To investigate the impact of demographic factors and the pharmacology of drugs (Anatomical Therapeutic Chemical (ATC) classification) on GMS expenditure.

5. To explore the consequences of GMS policy changes on prescribing expenditure and behaviour between 2008 and 2014.

The thesis is centred on three published articles and spans the end of a booming Irish economy in 2007, the recession of 2008-2013, and the beginning of a recovery in 2014. The literature identified a number of factors influencing pharmaceutical expenditure, including population growth, population ageing, changes in drug utilisation and drug therapies, age, gender, and location. It also identified the methods previously used in predictive modelling; consequently, a Monte Carlo Simulation (MCS) model was used to simulate projected expenditures to 2026. The literature further guided the use of Ordinary Least Squares (OLS) regression in determining the demographic and pharmacological factors influencing prescribing expenditure. The study commences against a backdrop of growing GMS prescribing costs, which rose from €250 million in 1998 to over €1 billion by 2007. Using a sample of 2007 HSE-PCRS prescribing data (n=192,000) and CSO population data from 2008, Conway et al. (2014) estimated that GMS prescribing expenditure could rise to €2 billion by 2026. The cogency of these findings was affected by the global economic crisis of 2008, which resulted in a sharp contraction in the Irish economy and mounting fiscal deficits, leading to Ireland's entry into a bailout programme. The sustainability of funding community drug schemes, such as the GMS, came under the spotlight of the EU, IMF and ECB (Troika), who set stringent targets for reducing drug costs as conditions of the bailout programme. Cost containment measures included the introduction of income eligibility limits for GP visit cards and medical cards for those aged 70 and over, the introduction of co-payments for prescription items, and reductions in wholesale mark-up and pharmacy dispensing fees. Projections for GMS expenditure were re-evaluated using 2012 HSE-PCRS prescribing population data and CSO population data based on Census 2011. Taking into account both cost containment measures and revised population predictions, GMS expenditure is estimated to increase by 64%, from €1.1 billion in 2016 to €1.8 billion by 2026 (Conway Lenihan and Woods, 2015). In the final paper, a cross-sectional study was carried out on the HSE-PCRS population prescribing database (n=1.63 million claimants) to investigate the impact of demographic factors and the pharmacology of the drugs on GMS prescribing expenditure. Those aged over 75 (β = 1.195) and cardiovascular prescribing (β = 1.193) were the greatest contributors to annual GMS prescribing costs. Respiratory drugs (montelukast) recorded the highest proportion and expenditure for GMS claimants under the age of 15. Drugs prescribed for the nervous system (escitalopram, olanzapine and pregabalin) were highest for those between 16 and 64 years, while cardiovascular drugs (statins) were highest for those aged over 65. Prescribing costs were higher for females, who were prescribed more items across the four ATC groups, except among children under 11 (Conway Lenihan et al., 2016). This research indicates that growth in the proportion of elderly claimants and associated levels of cardiovascular prescribing, particularly for statins, will present difficulties for Ireland in terms of cost containment. While policies aimed at cost containment (co-payment charges, generic substitution, reference pricing, adjustments to GMS eligibility) can be used to curtail expenditure, health promotion programmes and educational interventions should be given equal emphasis. Policies intended to affect physicians' prescribing behaviour, such as guidelines, information (about prices and less expensive alternatives) and feedback, together with budgetary restrictions, could also yield savings.
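In the spirit of the MCS projection model described above, here is a minimal sketch with assumed growth-rate distributions; the starting value comes from the abstract, but the distributions themselves are illustrative assumptions.

```python
# Sketch of a Monte Carlo expenditure projection in the spirit of the MCS
# model described above: annual demographic and cost growth are drawn from
# assumed distributions and expenditure is simulated to 2026.
import numpy as np

rng = np.random.default_rng(4)
n_sims = 10_000
expenditure_2016 = 1.1e9  # euro, starting point taken from the abstract

sims = np.empty(n_sims)
for i in range(n_sims):
    exp_ = expenditure_2016
    for year in range(2017, 2027):
        demo_growth = rng.normal(0.025, 0.010)   # ageing / claimant growth (assumed)
        cost_change = rng.normal(0.025, 0.015)   # price and utilisation, net of containment (assumed)
        exp_ *= (1 + demo_growth) * (1 + cost_change)
    sims[i] = exp_

lo, med, hi = np.percentile(sims, [2.5, 50, 97.5])
print(f"2026 projection: {med/1e9:.2f}bn (95% interval {lo/1e9:.2f}-{hi/1e9:.2f}bn)")
```

Under these assumed growth rates the median lands near the €1.8 billion figure reported above; the simulation's value is the uncertainty band around it.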

Relevance: 100.00%

Abstract:

Megabenthos plays a major role in the overall energy flow on Arctic shelves, but information on megabenthic secondary production on large spatial scales is scarce. Here, we estimated for the first time megabenthic secondary production for the entire Barents Sea shelf by applying a species-based empirical model to an extensive dataset from the joint Norwegian-Russian ecosystem survey. Spatial patterns and relationships were analyzed within a GIS. The environmental drivers behind the observed production pattern were identified by applying an ordinary least squares regression model. Geographically weighted regression (GWR) was used to examine the spatially varying relationship between secondary production and the environment at a shelf-wide scale. Significantly higher megabenthic secondary production was found in the northeastern, seasonally ice-covered regions of the Barents Sea than in the permanently ice-free southwest. The environmental parameters that relate significantly to the observed pattern are bottom temperature and salinity, sea ice cover, new primary production, trawling pressure, and bottom current speed. The GWR proved to be a versatile tool for analyzing the regionally varying relationships between benthic secondary production and its environmental drivers (R² = 0.73). The observed pattern indicates tight pelagic-benthic coupling in the realm of the productive marginal ice zone. The ongoing decrease in winter sea ice extent and the associated poleward movement of the seasonal ice edge point towards a distinct decline in benthic secondary production in the northeastern Barents Sea in the future.
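A minimal sketch of the GWR idea described above: a locally weighted least squares fit at each station, with Gaussian kernel weights that decay with distance. Data, bandwidth, and variable names are illustrative assumptions; dedicated packages (e.g. mgwr) add bandwidth selection and diagnostics.

```python
# Minimal sketch of geographically weighted regression (GWR): at each
# station, a weighted least squares fit with Gaussian kernel weights.
# All data are simulated assumptions.
import numpy as np

rng = np.random.default_rng(5)
n = 300
coords = rng.uniform(0, 100, (n, 2))            # station positions
temp = rng.normal(0, 2, n)                      # bottom temperature
ice = rng.uniform(0, 1, n)                      # sea-ice cover
# Spatially varying effect: ice matters more in the "north" (high y)
beta_ice = 1.0 + 0.02 * coords[:, 1]
production = 2.0 + 0.5 * temp + beta_ice * ice + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), temp, ice])
bandwidth = 20.0                                # kernel bandwidth (assumed)

local_ice = np.empty(n)
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian kernel weights
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * production, rcond=None)
    local_ice[i] = beta[2]                      # local ice coefficient

print("local ice coefficient range:", local_ice.min(), local_ice.max())
```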

Relevance: 100.00%

Abstract:

In time series analysis, the usual stochastic processes assume continuous marginal distributions and are generally not suitable for modelling count series, since the nonlinear features of such series pose statistical problems, particularly in parameter estimation. This motivated the investigation of methodologies appropriate for the analysis and modelling of series with discrete marginal distributions. In this context, Al-Osh and Alzaid (1987) and McKenzie (1988) introduced the class of non-negative integer-valued autoregressive models, the INAR processes. These models have been treated frequently in the scientific literature over recent decades, as their importance in applications across many fields of knowledge has attracted great interest. In this work, after a brief review of time series and the classical methods for their analysis, we present the first-order non-negative integer-valued autoregressive model, INAR(1), and its extension to order p, their properties, and several parameter estimation methods, namely the Yule-Walker method, Conditional Least Squares (CLS), Conditional Maximum Likelihood (CML), and Quasi-Maximum Likelihood (QML). We also present an automatic order-selection criterion for INAR models, based on the corrected Akaike Information Criterion (AICc), one of the criteria used to determine the order of autoregressive (AR) models. Finally, the INAR methodology is applied to real count data from the maritime transport and insurance sectors of Cabo Verde.
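A minimal sketch of an INAR(1) process with binomial thinning and Yule-Walker estimation, as reviewed above; Poisson innovations are an illustrative assumption.

```python
# Minimal sketch of an INAR(1) process with binomial thinning and
# Yule-Walker estimation, as reviewed above.
import numpy as np

rng = np.random.default_rng(6)
alpha, lam, n = 0.6, 2.0, 5000

# X_t = alpha o X_{t-1} + eps_t, where "o" is binomial thinning:
# each of the X_{t-1} counts survives with probability alpha.
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Yule-Walker: alpha is the lag-1 autocorrelation; the innovation mean
# follows from E[X] = lambda / (1 - alpha).
xc = x - x.mean()
alpha_hat = (xc[1:] * xc[:-1]).sum() / (xc ** 2).sum()
lam_hat = x.mean() * (1 - alpha_hat)
print(f"alpha_hat = {alpha_hat:.3f}, lambda_hat = {lam_hat:.3f}")
```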

Relevance: 100.00%

Abstract:

This thesis develops bootstrap methods for the factor models that have been widely used to generate forecasts since the pioneering diffusion-index article of Stock and Watson (2002). These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two in collaboration with Sílvia Gonçalves and Benoit Perron.

The first chapter studies how bootstrap methods can be used for inference in models that forecast h periods ahead. To this end, it examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates for confidence intervals of the estimated coefficients using these approaches, relative to asymptotic theory and to the wild bootstrap, when the regression errors are serially correlated.

The second chapter proposes bootstrap methods for constructing prediction intervals that relax the assumption of Gaussian innovations. We propose bootstrap prediction intervals for an observation h periods ahead and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. Because we treat the factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct prediction intervals that are valid under more general conditions. Moreover, even under Gaussianity, the bootstrap yields more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014).

The third chapter proposes consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion whose validity we also establish generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with the available selection methods. The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, those strongly correlated with interest-rate spreads and the Fama-French factors have good predictive power for excess returns.
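A minimal sketch of the plain residual-based wild bootstrap for a factor-augmented regression coefficient; the block and dependent variants the thesis proposes modify how the external draws are generated, and the "estimated factors" here are simulated stand-ins.

```python
# Sketch of a residual-based wild bootstrap for a (factor-augmented)
# regression coefficient, in the spirit of the methods described above.
# Data are simulated assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 200
f_hat = rng.normal(size=(n, 2))              # stand-in for estimated factors
beta = np.array([1.0, 0.5])
y = f_hat @ beta + rng.normal(size=n)

X = np.column_stack([np.ones(n), f_hat])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b_ols

B = 999
boot = np.empty((B, X.shape[1]))
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)      # Rademacher external draws
    y_star = X @ b_ols + resid * v           # wild-bootstrap sample
    boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]

lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])
print(f"95% bootstrap CI for the first factor loading: [{lo:.3f}, {hi:.3f}]")
```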

Relevance: 100.00%

Abstract:

The St. Lawrence River valley in eastern Canada is one of the most seismically active regions of eastern North America and is characterized by numerous intraplate earthquakes. After the rigid rotation of the tectonic plate, glacial isostatic adjustment is by far the largest source of geophysical signal in eastern Canada. Crustal deformations and deformation rates in this region were studied using more than 14 years of observations (9 years on average) from 112 continuously operating GPS stations. The velocity field was obtained from cleaned daily GPS coordinate time series by applying a combined model with least squares weighting. Velocities were estimated with noise models that include the temporal correlations of the three-dimensional coordinate time series. The horizontal velocity field shows the counterclockwise rotation of the North American plate, with a mean velocity of 16.8±0.7 mm/yr in a no-net-rotation model relative to ITRF2008. The vertical velocity field confirms uplift due to glacial isostatic adjustment throughout eastern Canada, with a maximum rate of 13.7±1.2 mm/yr, and subsidence to the south, mainly in the northern United States, with typical rates of -1 to -2 mm/yr and a minimum rate of -2.7±1.4 mm/yr. The noise behaviour of the three-dimensional GPS coordinate time series was analysed using spectral analysis and maximum likelihood estimation to test five noise models: power law; white noise; white plus flicker noise; white plus random-walk noise; and white plus flicker plus random-walk noise. The results show that the combination of white and flicker noise best describes the stochastic part of the time series. For all noise models, amplitudes are smallest in the north direction and largest in the vertical direction. White-noise amplitudes are roughly constant across the study area and are therefore exceeded, in all directions, by the flicker and random-walk noise. The flicker-noise model inflates the uncertainties of the estimated velocities by a factor of 5 to 38 relative to the white-noise model. The velocities estimated under all noise models are statistically consistent. The estimated Euler pole parameters for this region differ slightly, but significantly, from the global rotation of the North American plate. This difference potentially reflects local stresses in this seismic region and stresses caused by the difference in intraplate velocities between the two shores of the St. Lawrence River. Crustal deformation in the region was studied using the least squares collocation method. The interpolated horizontal velocities show spatially coherent motion: radially outward from the centres of maximum uplift in the north and radially inward toward the centres of maximum subsidence in the south, with typical velocities of 1 to 1.6±0.4 mm/yr. This pattern becomes more complex near the margins of the former glaciated zones.

Based on their directions, the intraplate horizontal velocities can be divided into three distinct zones, confirming other researchers' conclusions that three ice domes existed in the study area before the Last Glacial Maximum. A spatial correlation is observed between the zones of higher-magnitude intraplate horizontal velocities and the seismic zones along the St. Lawrence River. The vertical velocities were then interpolated to model the vertical deformation. The model shows a maximum uplift rate of 15.6 mm/yr southeast of Hudson Bay and typical subsidence rates of 1 to 2 mm/yr in the south, mainly in the northern United States. Along the St. Lawrence River, the horizontal and vertical motions are spatially coherent: a southeastward displacement of about 1.3 mm/yr and a mean uplift of 3.1 mm/yr relative to the North American plate. The vertical deformation rate is about 2.4 times the intraplate horizontal deformation rate. The deformation analysis shows the current state of deformation in eastern Canada as an expansion of the uplifting northern part and a compression of the subsiding southern part. Rotation rates average 0.011°/Ma. We observed NNW-SSE compression at a rate of 3.6 to 8.1 nstrain/yr in the Lower St. Lawrence seismic zone. In the Charlevoix seismic zone, extension at a rate of 3.0 to 7.1 nstrain/yr is oriented ENE-WSW. In the Western Quebec seismic zone, deformation follows a shear mechanism, with a compression rate of 1.0 to 5.1 nstrain/yr and an extension rate of 1.6 to 4.1 nstrain/yr. These measurements agree, to first order, with glacial isostatic adjustment models and with the maximum horizontal compressive stress of the World Stress Map project, obtained from focal mechanisms.
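A minimal sketch of estimating a station velocity from a daily coordinate series by least squares with trend and seasonal terms, consistent with the processing described above. The simulated series contains only white noise, whereas the study's preferred white-plus-flicker model would substantially inflate the velocity uncertainty.

```python
# Sketch of estimating a station velocity from a daily GPS coordinate time
# series by least squares with trend plus annual/semi-annual terms.
# Simulated data; white noise only, so the uncertainty is optimistic.
import numpy as np

rng = np.random.default_rng(8)
days = np.arange(9 * 365)                            # ~9 years of daily solutions
t = days / 365.25
true_vel = 3.1                                       # mm/yr uplift (illustrative)
up = (true_vel * t
      + 2.0 * np.sin(2 * np.pi * t)                  # annual signal
      + 0.5 * np.sin(4 * np.pi * t)                  # semi-annual signal
      + rng.normal(scale=3.0, size=t.size))          # white noise, mm

A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, up, rcond=None)
resid = up - A @ coef
cov = np.var(resid, ddof=A.shape[1]) * np.linalg.inv(A.T @ A)
print(f"velocity = {coef[1]:.2f} +/- {np.sqrt(cov[1, 1]):.2f} mm/yr")
```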

Relevance: 100.00%

Abstract:

After a period of strong growth in the value of dairy quota in Quebec, a ceiling on that value was set beginning in 2007. The ceiling had the effect of limiting the supply of quota on the market and the growth in the size of Quebec dairy farms. This situation raises questions of economic efficiency, since blocking farm growth prevents farms from benefiting from economies of size, if indeed such economies exist. This study therefore examines economies of size in dairy production in North America. Economies of size were measured using multiple linear regression on several monetary and non-monetary cost indicators. The analysis covers four size strata formed from a non-random sample of 847 farms in Quebec, New York State, and California, as well as a group of efficient farms (top group). The results demonstrate the existence of economies of size, mainly in fixed costs and particularly in non-monetary fixed costs. They also reveal that the two indicators with the largest economies of size are the cost of unpaid labour and depreciation. Moreover, as farm size increases, the additional economies of size realized become progressively smaller. Finally, the results indicate diseconomies of size in feed costs. The results for the top group point in the same direction. They also confirm that large efficient farms can achieve economies of size on most cost indicators, although the additional economies these farms can realize are smaller than those obtained by small efficient farms. Keywords: agriculture, dairy production, North America, economies of size, economic efficiency, linear regression.
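A minimal sketch of measuring economies of size by regression, in the spirit of the multiple linear regression on cost indicators described above; the data, functional form, and effect sizes are illustrative assumptions.

```python
# Sketch of measuring economies of size: unit cost regressed on (log) herd
# size, so a negative coefficient indicates economies of size that flatten
# out as size grows. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 847  # sample size from the study
df = pd.DataFrame({"cows": rng.uniform(30, 1500, n)})
# Assumed: per-unit fixed costs fall with size, with diminishing returns
df["cost_per_hl"] = (60 - 8 * np.log(df["cows"])
                     + rng.normal(scale=3, size=n))

model = smf.ols("cost_per_hl ~ np.log(cows)", data=df).fit()
print(model.params)  # negative log-size coefficient = economies of size
```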

Relevance: 100.00%

Abstract:

The study aims to identify the factors that influence the behavioral intention to adopt an academic information system (SIE) in a mandatory-use environment, applied to the procurement process at the Federal University of Pará (UFPA). To this end, a model combining innovation adoption and the Technology Acceptance Model (TAM) was used, focused on attitudes and intentions regarding behavioral intention. The research was a quantitative survey of a sample of 96 administrative staff at the institution. For data analysis, structural equation modeling (SEM) was used, estimated by the partial least squares method (PLS-PM). As to results, the constructs attitude and subjective norm were confirmed as strong predictors of behavioral intention at a pre-adoption stage. Although use of the SIE is mandatory, perceived voluntariness also predicts behavioral intention. Regarding attitude, classical TAM variables, such as ease of use and perceived usefulness, emerge as the main influences on attitude toward the system. It is hoped that the results of this study may inform more efficient management of the process of implementing information systems and technologies, particularly in public universities.
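A heavily simplified sketch of the structural logic described above: construct scores are formed as means of their indicators, and the paths to behavioral intention are estimated by OLS. Real PLS-PM iterates the outer weights (e.g. via a dedicated package such as plspm); this composite-score shortcut only illustrates the idea.

```python
# Simplified composite-score approximation to the PLS path model described
# above: attitude measured by three indicators, paths estimated by OLS.
# All data are simulated assumptions; this is not the PLS-PM algorithm.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 96  # sample size from the study
latent_att = rng.normal(size=n)
df = pd.DataFrame({f"att{i}": latent_att + rng.normal(scale=0.5, size=n)
                   for i in range(1, 4)})
df["norms"] = rng.normal(size=n)
df["intention"] = (0.6 * latent_att + 0.3 * df["norms"]
                   + rng.normal(scale=0.5, size=n))

df["attitude"] = df[["att1", "att2", "att3"]].mean(axis=1)  # composite score
paths = smf.ols("intention ~ attitude + norms", data=df).fit()
print(paths.params)
```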

Relevance: 100.00%

Abstract:

There is a great deal of evidence showing that education is extremely important in many economic and social dimensions. In Brazil, education is a right guaranteed by the Federal Constitution; however, Brazilian legislation promotes and supports the right to the three stages of basic education (kindergarten, elementary and high school) better than the right to education at the college level. According to educational census data (INEP, 2009), 78% of all enrolments in college education are in private institutions, while the reverse holds in high school, where 84% of all enrolments are in public schools, revealing a contradiction in university admissions. In the Brazilian scenario, public universities mostly receive students who performed better, having been prepared in private elementary and high schools, while private universities receive students whose basic education took place in public schools, which are characterized by low quality. These facts have led researchers to investigate the possible determinants of student performance on standardized tests, such as the Brazilian Vestibular exam, to guide the development of policies aimed at equal access to college education. Drawing inspiration from North American models of affirmative action, some Brazilian public universities have adopted quota policies to enable and facilitate the entry of "minorities" (blacks, pardos, natives, people of low income and public school students) into free college education. At the Federal University of Rio Grande do Norte (UFRN), the first incentives for candidates from public schools emerged in 2006 and have been improved and expanded over the last 7 years. This study aimed to analyse and discuss the Argument of Inclusion (AI), the affirmative action policy that provides additional scoring for students from public schools. Using an extensive database, ordinary least squares (OLS) and quantile regression techniques were applied, controlling for personal, socioeconomic and educational characteristics of candidates in the 2010 Vestibular exam of UFRN. The results demonstrate the importance of this incentive system, as well as the magnitude of the other variables' effects.
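A minimal sketch of the OLS-plus-quantile-regression comparison described above, on simulated data; the variable names and effect sizes are illustrative assumptions, not the Vestibular dataset.

```python
# Sketch of comparing OLS and quantile regression estimates of a
# public-school gap in exam scores, with a control. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 5000
df = pd.DataFrame({
    "public_school": rng.integers(0, 2, n),
    "income": rng.lognormal(0, 0.5, n),
})
df["score"] = (500 - 30 * df["public_school"] + 20 * np.log(df["income"])
               + rng.normal(scale=80, size=n))

ols = smf.ols("score ~ public_school + np.log(income)", data=df).fit()
print("OLS gap:", ols.params["public_school"])

# Quantile regression shows whether the gap differs across the score
# distribution (e.g. near an admission cutoff vs the median).
for q in (0.1, 0.5, 0.9):
    qr = smf.quantreg("score ~ public_school + np.log(income)", df).fit(q=q)
    print(f"q={q} gap:", qr.params["public_school"])
```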

Relevance: 100.00%

Abstract:

The purpose of this study was to examine relationships between multiple characteristics of maternal employment, parenting practices, and adolescents’ transition outcomes to young adulthood. The research addressed four main questions. First, are the characteristics of maternal work (i.e., hours worked, multiple jobs held, work schedules, earnings, and occupation) related to adolescents’ enrollment in post-secondary education, employment, or involvement in neither of these types of activities as young adults? Second, are the work characteristics related to parental involvement and monitoring, and are the parenting practices related to adolescents’ transition outcomes? Third, do parental involvement and monitoring mediate any relationships between the characteristics of maternal employment and adolescents’ transition outcomes? Finally, do any associations between characteristics of maternal employment and parenting practices and adolescents’ transition outcomes vary by poverty status, race/ethnicity, or gender? To address these questions, secondary data analysis was conducted using data from the National Longitudinal Survey of Youth (NLSY) from 1998 through 2004. The study sample consisted of 849 youths who were 15 through 17 years of age in either 1998 or 2000, and were 19 through 21 years of age when their transition outcomes in young adulthood were measured four years later. Multinomial logistic and ordinary least squares regression models were estimated to answer the research questions. Study findings indicated that of the maternal work characteristics, mothers’ multiple jobs held, occupation, and work schedule were significantly related to the youths’ transition outcomes. When mothers held multiple jobs for 1 to 25 weeks per year, and when mothers held jobs involving lower levels of occupational complexity, their youths were more likely to experience employment rather than post-secondary education. Adolescents whose mothers worked a standard schedule were less likely to experience transitions other than post-secondary education. With regard to the effects of maternal employment on parenting practices, none of the maternal work variables were related to parental involvement, and only one variable, mothers working less than 40 hours per week, was negatively related to parental monitoring. In addition, when parents were more involved with their youths’ education, the youths were less likely to transition into employment or other types of transitions rather than post-secondary education. The parenting practices did not mediate the relation between the significant work variables (holding multiple jobs, work schedule, and occupation) and youths’ transition outcomes. Finally, none of the interactions between maternal work characteristics and poverty status, race/ethnicity, and gender met the criteria for significance, but in a series of subgroup analyses some differences according to poverty status and gender were found. Despite the lack of mediation and moderation, the findings of this study have important implications for social policy and social work intervention. Based on the findings, suggestions are made in these areas to improve working mothers’ lives and their adolescents’ development and successful transition to adulthood. Finally, directions for future research are discussed.
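A minimal sketch of the multinomial logistic specification described above, with a three-category transition outcome on simulated data; the variable names and effects are illustrative assumptions.

```python
# Sketch of a multinomial logit for a three-way transition outcome
# (post-secondary education, employment, neither) on maternal work
# characteristics. Simulated data and assumed effects only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
n = 849  # sample size from the study
df = pd.DataFrame({
    "multiple_jobs": rng.integers(0, 2, n),
    "standard_schedule": rng.integers(0, 2, n),
    "monitoring": rng.normal(size=n),
})
# 0 = post-secondary education (reference), 1 = employment, 2 = neither
util_emp = -0.5 + 0.6 * df["multiple_jobs"] - 0.4 * df["monitoring"]
util_nei = -1.0 - 0.5 * df["standard_schedule"]
p = np.exp(np.column_stack([np.zeros(n), util_emp, util_nei]))
p /= p.sum(axis=1, keepdims=True)
df["outcome"] = [rng.choice(3, p=row) for row in p]

mnl = smf.mnlogit(
    "outcome ~ multiple_jobs + standard_schedule + monitoring", df).fit()
print(mnl.summary())
```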

Relevance: 100.00%

Abstract:

The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. In Chapter 1, an introduction to protein lysate array quantification is presented, followed by the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis illustrate the performance of the proposed method in finite samples. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model. We consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method leads to more accurate estimates and better confidence intervals than the existing single-step least squares method.
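A minimal sketch of the basic building block of lysate-array quantification: fitting a sigmoidal (four-parameter logistic) response curve by nonlinear least squares. The thesis's multi-step and mixed-effects estimators extend this building block; the dilution series here is simulated.

```python
# Sketch of fitting a sigmoidal (four-parameter logistic) response curve
# by nonlinear least squares, the basic step in lysate-array quantification.
# One simulated dilution series in triplicate.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, lower, upper, slope, log_ec50):
    """Sigmoidal intensity as a function of log concentration."""
    return lower + (upper - lower) / (1 + np.exp(-slope * (log_conc - log_ec50)))

rng = np.random.default_rng(13)
log_conc = np.repeat(np.linspace(-3, 3, 8), 3)   # 8 dilution steps, triplicate
signal = (four_pl(log_conc, 0.1, 2.0, 1.2, 0.0)
          + rng.normal(scale=0.05, size=log_conc.size))

params, cov = curve_fit(four_pl, log_conc, signal, p0=[0.0, 2.5, 1.0, 0.0])
print("estimated (lower, upper, slope, log EC50):", params)
```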
