876 results for Almost Convergence


Relevance:

20.00%

Publisher:

Abstract:

Convergence has been a popular theme in applied economics since the seminal papers of Barro (1991) and Barro and Sala-i-Martin (1992). The very notion of convergence quickly becomes problematic from an academic viewpoint, however, when we try to formalise a framework for thinking about these issues. In light of the abundance of available convergence concepts, it would be useful to have a more universal framework that encompasses existing concepts as special cases. Moreover, much of the convergence literature has treated the issue as a zero-one outcome. We argue that it is more sensible and useful for policy makers and academic researchers to also consider ongoing convergence over time. Assessing the progress of ongoing convergence is one interesting and important means of evaluating whether the Eastern European New Member Countries (NMC) of the European Union (EU) are getting closer to being deemed “ready” to join the European Monetary Union (EMU), that is, to fulfilling the Maastricht convergence criteria.

Relevance:

20.00%

Publisher:

Abstract:

From 1992 to 2012, 4.4 billion people were affected by disasters worldwide, with almost 2 trillion USD in damages and 1.3 million people killed. The increasing threat of disasters stresses the need to provide solutions for the challenges faced by disaster managers, such as the logistical deployment of the resources required to provide relief to victims. The location of emergency facilities, stock prepositioning, evacuation, inventory management, resource allocation, and relief distribution have been identified as directly affecting the relief provided to victims during a disaster. Managing these factors appropriately is critical to reduce suffering. Disaster management commonly attracts several organisations working alongside each other and sharing resources to cope with the emergency. Coordinating these agencies is a complex task, but there is little research considering multiple organisations, and none actually optimising the number of actors required to avoid shortages and convergence. The aim of this research is to develop a system for disaster management based on a combination of optimisation techniques and geographical information systems (GIS) to aid multi-organisational decision-making. An integrated decision system was created comprising a cartographic model implemented in GIS to discard floodable facilities, combined with two models focused on optimising the decisions regarding the location of emergency facilities, stock prepositioning, the allocation of resources and relief distribution, along with the number of actors required to perform these activities. Three in-depth case studies in Mexico were conducted, gathering information from different organisations. The cartographic model proved to reduce the risk of selecting unsuitable facilities. The preparedness and response models showed the capacity to optimise the decisions and the number of organisations required for logistical activities, pointing towards an excess of actors involved in all cases. The system as a whole demonstrated its capacity to provide integrated support for disaster preparedness and response, along with room for improvement for Mexican organisations in flood management.
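
As a generic illustration of the kind of optimisation model such a decision system rests on (not the thesis's actual formulation), the sketch below states a small capacitated facility-location / relief-distribution problem in PuLP; all data, names, and cost figures are hypothetical placeholders.

```python
# Generic capacitated facility-location / relief-distribution sketch (PuLP).
# All data and names are hypothetical placeholders, not the thesis's models.
import pulp

facilities = ["F1", "F2", "F3"]               # candidate (non-floodable) depots
demand_pts = ["D1", "D2"]                     # affected communities
open_cost  = {"F1": 100, "F2": 120, "F3": 90}
capacity   = {"F1": 50,  "F2": 60,  "F3": 40}
demand     = {"D1": 45,  "D2": 35}
ship_cost  = {("F1", "D1"): 2, ("F1", "D2"): 4,
              ("F2", "D1"): 3, ("F2", "D2"): 1,
              ("F3", "D1"): 5, ("F3", "D2"): 2}

prob = pulp.LpProblem("relief_prepositioning", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", facilities, cat="Binary")        # open depot f?
x = pulp.LpVariable.dicts("ship", list(ship_cost), lowBound=0)     # units f -> d

# Minimise fixed opening costs plus distribution costs.
prob += (pulp.lpSum(open_cost[f] * y[f] for f in facilities)
         + pulp.lpSum(ship_cost[f, d] * x[f, d] for (f, d) in ship_cost))

for d in demand_pts:          # every demand point must be fully served
    prob += pulp.lpSum(x[f, d] for f in facilities) >= demand[d]
for f in facilities:          # ship only from open depots, within capacity
    prob += pulp.lpSum(x[f, d] for d in demand_pts) <= capacity[f] * y[f]

prob.solve()
print({f: y[f].value() for f in facilities})
```

In practice, the candidate facilities fed into a model of this kind would first be filtered by the GIS cartographic layer, and further variables would be needed to optimise the number of participating organisations.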

Relevance:

20.00%

Publisher:

Abstract:

The recession of 2008/09 seriously affected almost all European countries, but some were hurt to a greater extent. The timing of an economic downturn can never be appropriate, but this one found Hungary in a vulnerable condition, leading to a prolonged struggle to find a way out. However, each company's own experience of, and approach to, the crisis can differ from what the economy-wide picture would suggest. This study aims to contribute to the emerging research field concerning the concept of proactive marketing. We investigated the relationship between approaching the crisis as an opportunity and market performance. Based on a survey of 173 companies, we demonstrate that proactive marketing can lead to better performance, but that larger companies have the advantage of implementing this strategy more successfully.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this dissertation was to investigate cross-cultural differences in the use of the Internet. Hofstede's model of national culture was employed as the theoretical foundation for the analysis of cross-cultural differences. Davis's technology acceptance model was employed as the theoretical foundation for the analysis of Internet use. Secondary data from an on-line survey of Internet users in 22 countries conducted in April 1997 by the Georgia Tech Research Corporation measured the dependent variables of Internet use and the independent variables of attitudes toward technology. Hofstede's stream of research measured the independent variables of the five dimensions of national culture. Contrary to expectations, regression analyses at the country level of analysis did not detect cultural differences. As expected, regression analyses at the individual level of analysis did detect cultural differences. The results indicated that perceived usefulness was related to the frequency of Internet shopping in the Germanic and Anglo clusters, where masculinity was high. Perceived ease of use was related to the frequency of Internet shopping in the Latin cluster, where uncertainty avoidance was high. Neither perceived usefulness nor perceived ease of use was related to the frequency of Internet shopping in the Nordic cluster, where masculinity and uncertainty avoidance were low. As expected, analysis of variance at the cluster level of analysis indicated that censorship was a greater concern in Germany and Anglo countries, where masculinity was high. Government regulation of the Internet was less preferred in Germany, where power distance was low. Contrary to expectations, concern for transaction security was lower in the Latin cluster, where uncertainty avoidance was high. Concern for privacy issues was lower in the U.S., where individualism was high. In conclusion, the results suggested that Internet users represented a multicultural community, not a standardized virtual community. Based on the findings, specific guidance was provided on how international managers and marketers could develop culturally sensitive strategies for training and promoting Internet services.

Relevance:

20.00%

Publisher:

Abstract:

We give a complete description of the four-dimensional Lie algebras g for which Donaldson's tame-compatible question has an affirmative answer for all almost complex structures J on g. As a consequence, examples are given of (non-unimodular) four-dimensional Lie algebras with almost complex structures that are tamed by, but not compatible with, symplectic forms. Note that Donaldson asked his question for compact four-manifolds. In that context, the problem is still open, but it is believed that any tamed almost complex structure is in fact compatible with a symplectic form. In this presentation, I will define the basic objects involved and give some insights into the proof. The key to the proof is translating the problem into a linear algebra setting. This is joint work with Dr. Draghici.
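
For readers unfamiliar with the terminology, the standard definitions of "tamed" and "compatible" are as follows (usual conventions, which may differ in minor ways from the speaker's own notation):

```latex
% Standard conventions; the talk's own notation may differ slightly.
\omega \text{ tames } J
   \;\Longleftrightarrow\; \omega(X, JX) > 0 \ \text{ for all } X \neq 0,
\qquad
\omega \text{ is compatible with } J
   \;\Longleftrightarrow\; \omega \text{ tames } J \ \text{ and } \ \omega(JX, JY) = \omega(X, Y) \ \text{ for all } X, Y.
```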

Relevance:

20.00%

Publisher:

Abstract:

Predicting the impacts of environmental change on marine organisms, food webs, and biogeochemical cycles presently relies almost exclusively on short-term physiological studies, while the possibility of adaptive evolution is often ignored. Here, we assess adaptive evolution in the coccolithophore Emiliania huxleyi, a well-established model species in biological oceanography, in response to ocean acidification. We previously demonstrated that this globally important marine phytoplankton species adapts within 500 generations to elevated CO2. After 750 and 1000 generations, no further fitness increase occurred, and we observed phenotypic convergence between replicate populations. We then exposed adapted populations to two novel environments to investigate whether the underlying basis for high-CO2 adaptation involves functional genetic divergence, assuming that different novel mutations become apparent via divergent pleiotropic effects. The novel "high light" environment did not reveal such genetic divergence, whereas growth in a low-salinity environment revealed strong pleiotropic effects in high-CO2-adapted populations, indicating divergent genetic bases for adaptation to high CO2. This suggests that pleiotropy plays an important role in the adaptation of natural E. huxleyi populations to ocean acidification. Our study highlights the potential mutual benefits for oceanography and evolutionary biology of using ecologically important marine phytoplankton for microbial evolution experiments.

Relevance:

20.00%

Publisher:

Abstract:

In the context of climate change over South America (SA), it has been observed that the combination of higher temperatures and reduced rainfall causes a range of impacts, such as extreme precipitation events, favourable conditions for fires, and droughts. As a result, these regions face a growing threat of local or generalised water shortage, and water availability in Brazil depends largely on the weather and its variations on different time scales. The main objective of this research is therefore to study the moisture budget using regional climate models (RCM) from the project Regional Climate Change Assessments for La Plata Basin (CLARIS-LPB), and to combine these RCMs through two statistical techniques in an attempt to improve prediction over three regions of SA: the Amazon (AMZ), Northeast Brazil (NEB), and the La Plata Basin (LPB), for a past climate (1961-1990) and a future climate (2071-2100). Moisture transport over SA was investigated through vertically integrated moisture fluxes. The main results showed that the mean water-vapour fluxes in the tropics (AMZ and NEB) are largest across the eastern and northern edges, indicating that the contributions of the North and South Atlantic trade winds are equally important for moisture inflow during JJA and DJF. This configuration was observed in all models and climates. Comparing the climates, the convergence of the moisture flux in the past climate was smaller than in the future climate in several regions and seasons. Similarly, for the future climate most of the models simulate reduced precipitation in the tropical regions (AMZ and NEB) and an increase over the LPB region. The second phase of this research was to combine the RCMs to predict precipitation more accurately, using multiple regression on principal components (C.RPC) and a convex combination (C.EQM), and then to analyse and compare the resulting combinations (ensembles). The results indicated that the RPC combination better represented the observed precipitation in both climates: besides producing values close to those observed, it achieved correlation coefficients of moderate to strong magnitude in almost every month across climates and regions, as well as lower dispersion (RMSE). A significant advantage of the combination methods was their ability to capture extreme events (outliers) in the study regions. In general, C.EQM captured more wet extremes, while C.RPC captured more dry extremes, in both climates and in the three regions studied.
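
For reference, the vertically integrated moisture flux and the moisture budget referred to above are usually written as follows (textbook form; the exact formulation applied to the CLARIS-LPB model output may differ):

```latex
% Textbook form of the vertically integrated moisture flux and budget.
\mathbf{Q} = \frac{1}{g}\int_{p_{top}}^{p_s} q\,\mathbf{v}\;dp, \qquad
\frac{\partial W}{\partial t} + \nabla\cdot\mathbf{Q} = E - P, \qquad
W = \frac{1}{g}\int_{p_{top}}^{p_s} q\;dp,
```

where q is the specific humidity, v the horizontal wind, W the precipitable water, E evaporation, and P precipitation; convergence of Q (negative divergence) supplies the moisture available for precipitation.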

Relevance:

20.00%

Publisher:

Abstract:

ACKNOWLEDGMENTS. We thank members of the L.Y. and K.B. laboratories for helpful discussions. This work was supported through the European Research Council Grant StG CA629F04E (to L.Y.); a Harvard University Milton Fund Award (to K.B.); Ruth L. Kirschstein National Research Service Award 1 F32 GM096699 from the NIH (to L.Y.); National Science Foundation Grant IOS-1146465 (to K.B.); NIH National Institute of General Medical Sciences Grant 2R01GM078536 (to D.E.S.); and Biotechnology and Biological Sciences Research Council Grant BB/L000113/1 (to D.E.S.).

Relevance:

20.00%

Publisher:

Abstract:

Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.

Relevance:

20.00%

Publisher:

Abstract:

In this article, we highlight the significance of, and need for, conducting context-specific human resource management (HRM) research by focusing on four critical themes. First, we discuss the need to analyze the convergence-divergence debate on HRM in Asia-Pacific. Next, we present an integrated framework useful for conducting cross-national HRM research focused on the key determinants of the dominant national HRM systems in the region. Following this, we discuss the critical challenges facing the HRM function in Asia-Pacific. Finally, we present an agenda for future research in the form of a series of research themes.

Relevance:

20.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, even though uncertainty quantification remains highly relevant in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
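
For context, the latent structure (latent class / PARAFAC) factorisation mentioned above has the standard form below; the collapsed Tucker decompositions proposed in Chapter 2 generalise this and are not reproduced here.

```latex
% Standard latent-class (PARAFAC) factorisation of the joint pmf of
% p categorical variables y_1, ..., y_p with k latent classes.
\Pr(y_1 = c_1, \ldots, y_p = c_p)
  = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \nu_h \ge 0, \quad \sum_{h=1}^{k} \nu_h = 1, \quad \sum_{c} \lambda^{(j)}_{h c} = 1,
```

so the probability tensor has nonnegative rank at most k, which is the quantity the Chapter 2 results relate to the support of a log-linear model.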

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
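
As a concrete reference point for the samplers discussed above, here is a minimal sketch of the truncated-Normal (Albert-Chib) data augmentation Gibbs sampler for probit regression. The Gaussian prior, its scale, and all implementation details are assumptions made for illustration; the slow-mixing analysis of Chapter 7 is not reproduced here.

```python
# Minimal Albert-Chib truncated-Normal data augmentation Gibbs sampler for
# probit regression with a beta ~ N(0, tau2 * I) prior (prior is an assumption).
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, tau2=100.0, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)   # conditional posterior covariance
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # 1) z_i | beta, y_i ~ N(x_i'beta, 1) truncated to (0, inf) if y_i = 1
        #    and to (-inf, 0) if y_i = 0 (bounds standardized around the mean).
        mu = X @ beta
        lower = np.where(y == 1, -mu, -np.inf)
        upper = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
        # 2) beta | z ~ N(V X'z, V)
        m = V @ (X.T @ z)
        beta = m + L @ rng.standard_normal(p)
        draws[t] = beta
    return draws
```

With very few observed successes, the autocorrelation of the resulting beta draws becomes severe, which is the regime the chapter studies.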

Relevance:

20.00%

Publisher:

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge is to define an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel to each subset using regularized regression or a Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
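
A rough sketch of these steps follows, using the lasso (via scikit-learn) as the per-subset selection method; any implementation detail beyond the description above is an assumption, and the per-subset fits would normally run in parallel rather than serially.

```python
# Rough sketch of the "message" steps described above (assumed details).
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message_fit(X, y, n_subsets=5, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    subsets = np.array_split(rng.permutation(n), n_subsets)

    # Step 1: variable selection on each subset (lasso used as the selector here).
    inclusion = np.zeros((n_subsets, p), dtype=int)
    for s, rows in enumerate(subsets):
        sel = LassoCV(cv=5).fit(X[rows], y[rows])
        inclusion[s] = (sel.coef_ != 0).astype(int)

    # Step 2: "median" feature inclusion index -> keep features selected in
    # at least half of the subsets.
    keep = np.where(np.median(inclusion, axis=0) >= 0.5)[0]

    # Step 3: re-estimate coefficients on each subset using only the kept
    # features, then average the subset estimates.
    coefs = np.zeros((n_subsets, keep.size))
    for s, rows in enumerate(subsets):
        ols = LinearRegression().fit(X[rows][:, keep], y[rows])
        coefs[s] = ols.coef_
    return keep, coefs.mean(axis=0)
```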

While sample space partitioning is useful in handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
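
The following is a generic sketch of feature-space partitioning with a decorrelation step in the spirit of DECO; the precise form of the decorrelation transform used here (a row-whitening matrix built from XX'/p with a small ridge term) and the choice of per-subset fitting method are assumptions, not the thesis's exact algorithm.

```python
# Generic sketch of feature-space partitioning with a decorrelation step.
# The exact transform is an assumption: rows of (X, y) are premultiplied by
# a whitening matrix built from X X'/p before the feature split.
import numpy as np
from sklearn.linear_model import LassoCV

def deco_sketch(X, y, n_workers=4, ridge=1.0):
    n, p = X.shape
    # Decorrelation / row-whitening step (assumed form, with a ridge term).
    G = X @ X.T / p + ridge * np.eye(n)
    vals, vecs = np.linalg.eigh(G)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T        # G^{-1/2}
    Xw, yw = W @ X, W @ y

    # Partition the (decorrelated) features across workers and fit each
    # subset with a high-dimensional method (lasso here, as an example).
    coef = np.zeros(p)
    for cols in np.array_split(np.arange(p), n_workers):
        fit = LassoCV(cv=5).fit(Xw[:, cols], yw)
        coef[cols] = fit.coef_
    return coef
```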

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), which leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance:

20.00%

Publisher:

Abstract:

Background: Healthcare worldwide needs translation of basic ideas from engineering into the clinic. Consequently, there is increasing demand for graduates equipped with the knowledge and skills to apply interdisciplinary medicine/engineering approaches to the development of novel solutions for healthcare. The literature provides little guidance regarding barriers to, and facilitators of, effective interdisciplinary learning for engineering and medical students in a team-based project context. Methods: A quantitative survey was distributed to engineering and medical students and staff in two universities, one in Ireland and one in Belgium, to chart knowledge and practice in interdisciplinary learning and teaching, and in the teaching of innovation. Results: We report important differences between the disciplines, for both staff and students, in attitudes towards, and perceptions of, the relevance of interdisciplinary learning opportunities and the role of creativity and innovation. There was agreement across groups concerning preferred learning and instructional styles and module content. Medical students showed greater resistance to the use of structured creativity tools and interdisciplinary teams. Conclusions: The results of this international survey will help to define the optimal learning conditions under which undergraduate engineering and medicine students can learn to consider the diverse factors which determine the success or failure of a healthcare engineering solution.