945 results for EFFECTIVE-MASS THEORY


Relevance: 30.00%

Abstract:

Restaurant management and the leadership styles of men and women who serve as hosts to the dining public are the subject of this study. The author asks: What kind of managers are they? What are the operational results of their efforts? Is there a relationship between managerial style and operational outcomes? How are managerial styles themselves related to each other?

Relevance: 30.00%

Abstract:

What constitutes effective corporate governance? Which director characteristics render boards effective at positively influencing firm-level performance outcomes? This dissertation examines these questions by taking a multilevel, multidisciplinary approach to corporate governance. I explore the individual-, team-, and firm-level factors that enable directors to serve effectively as strategic resources during international expansion. I argue that directors' international experience improves their ability to serve as effective strategic consultants and resource providers to firms during the complex internationalization process. However, unlike prior research, which tends to assume that directors with the potential to provide important resources uniformly do so, I acknowledge contextual factors (i.e., board cohesiveness, strategic relevance of directors' experience) that affect their propensity to actually influence outcomes. I explore these issues in three essays: one review essay and two empirical essays.

In the first empirical essay, I integrate resource dependence theory with insights from social-psychological research to explore the influence of board capital on firms' cross-border M&A performance. Using a sample of cross-border M&As completed by S&P 500 firms from 2004 to 2009, I find evidence that directors' depth of international experience is associated with superior pre-deal outcomes. This suggests that boards' deep, market-specific knowledge is valuable during the target selection phase. I further find that directors' breadth of international experience is associated with superior post-deal performance, suggesting that these directors' global mindset helps firms in the post-M&A integration phase. I also find that these relationships are positively moderated by board cohesiveness, measured by boards' internal social ties.

In the second empirical essay, I explore the boundary conditions of international board capital by examining how the characteristics of firms' internationalization strategy moderate the relationship between board capital and firm performance. Using a panel of 377 S&P 500 firms observed from 2004 to 2011, I find that boards' depth of international experience and social capital are more important during early stages of internationalization, when firms tend to lack market knowledge and legitimacy in the host markets. On the other hand, I find that breadth of international experience has a stronger relationship with performance when firms have a higher scope of internationalization and information-processing demands are higher.

Relevance: 30.00%

Abstract:

The discovery of giant stars in the G and K spectral regions showing moderate to rapid rotation and single-star behavior, namely constant radial velocity, represents an important topic of study in stellar astrophysics. Such anomalous rotation clearly violates theoretical predictions for the evolution of stellar rotation, since single stars in evolved evolutionary stages are expected to have low rotation owing to evolutionary expansion. This property is well established observationally, with different studies showing that for single giant stars of spectral types G and K the rotation values are typically smaller than 5 km s⁻¹. This Thesis seeks to contribute to resolving the paradigm described above by searching for single stars of spectral types G and K with anomalous rotation, typically moderate to rapid, in other luminosity classes. In this context, we analyzed a large stellar sample consisting of 2010 apparently single stars of luminosity classes IV, III, II, and Ib with spectral types G and K, with rotational velocity v sin i and radial velocity measurements obtained from observations made with CORAVEL spectrometers. As a first major result, we discovered anomalous rotators also among subgiant, bright giant, and supergiant stars, namely stars of luminosity classes IV, II, and Ib, in contrast to previous studies, which reported anomalous rotators only among the classical luminosity class III giants. This finding is of great significance because it allows us to analyze the presence of anomalous rotation at different mass intervals, since the luminosity classes considered here cover a mass range between approximately 0.80 and 20 M⊙. In the present survey we discovered 1 subgiant, 9 giants, 2 bright giants, and 5 Ib supergiants in the G and K spectral regions with values of v sin i ≥ 10 km s⁻¹ and single-star behavior.

These 17 stars correspond to a frequency of 0.8% of G and K single evolved stars with anomalous rotation in the mentioned luminosity classes listed in the Bright Star Catalog, which is complete to visual magnitude 6.3. Given these new findings, based on a stellar sample complete in visual magnitude, we conducted a comparative statistical analysis using the Kolmogorov–Smirnov test, from which we conclude that the distributions of rotational velocity, v sin i, for single evolved stars with anomalous rotation in luminosity classes III and II are similar to the v sin i distributions for spectroscopic binary systems with evolved components of the same spectral type and luminosity class. This result indicates that coalescence between the stars of a binary system might be a possible mechanism to explain the abnormal rotation observed in these rotators, at least among the giants and bright giants, where the excess rotation would be associated with the transfer of angular momentum to the star resulting from the merger. Another important result of this Thesis concerns the infrared emission of most of the anomalously rotating stars studied here: 14 stars of the sample tend to show an IR excess compared with single low-rotation stars of the same luminosity class. This property represents an additional link in the search for the physical mechanisms responsible for the observed abnormal rotation, since recent theoretical studies show that accretion of sub-stellar-mass objects, such as brown dwarfs and giant planets, by the host star can significantly raise its rotation while also producing a circumstellar dust disk. This last result seems to point in that direction, since dust disks formed during star formation are not expected to survive to the subgiant, giant, and Ib supergiant stages.

In summary, besides the discovery of single G and K evolved stars of luminosity classes IV, II, and Ib with rotation anomalously high compared to the predictions of stellar evolution theory, this Thesis presents the frequency of these abnormal rotators in a stellar sample complete to visual magnitude 6.3. We also present solid evidence that coalescence in stellar binary systems and accretion of brown dwarfs or giant planets by the host stars can act as mechanisms responsible for the puzzling phenomenon of anomalous rotation in single evolved stars.
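The comparative Kolmogorov–Smirnov analysis described above can be sketched with SciPy; the samples below are synthetic stand-ins (the thesis uses CORAVEL v sin i measurements), so the log-normal shapes, parameters, and sample sizes are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical v sin i samples in km/s: 17 anomalous single rotators vs
# a set of spectroscopic binaries with evolved components (assumed shapes).
vsini_single_anomalous = rng.lognormal(mean=2.6, sigma=0.4, size=17)
vsini_binary_components = rng.lognormal(mean=2.6, sigma=0.4, size=40)

# Two-sample K-S test: a small statistic / large p-value means the test
# cannot distinguish the parent distributions of the two samples.
stat, p_value = ks_2samp(vsini_single_anomalous, vsini_binary_components)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```

A failure to reject here is consistent with the thesis's conclusion that the two populations share similar rotational-velocity distributions.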

Relevance: 30.00%

Abstract:

Acknowledgement: The first author would like to acknowledge the University of Aberdeen and the Henderson Economics Research Fund for funding his PhD studies in the period 2011–2014, which formed the basis for the research presented in this paper. The first author would also like to acknowledge the Macaulay Development Trust, which funds his postdoctoral fellowship with The James Hutton Institute, Aberdeen, Scotland. The authors thank two anonymous referees for valuable comments and suggestions on earlier versions of this paper. All usual caveats apply.

Relevance: 30.00%

Abstract:

The large intrinsic bandgap of NiO hinders its potential application as a photocatalyst under visible-light irradiation. In this study, we have performed first-principles calculations of N- and C-doped NiO, using screened-exchange hybrid density functional theory with the HSE06 functional, to investigate the effect of doping on the electronic structure of NiO. C-doping at an oxygen site induces dopant-derived gap states whose positions suggest that the top of the valence band is made up primarily of C 2p-derived states with some Ni 3d contributions, while the lowest-energy empty state lies in the middle of the gap. This leads to an effective bandgap of 1.7 eV, which is of potential interest for photocatalytic applications. N-doping induces comparatively weak dopant–Ni 3d interaction but results in similar positions of the dopant-induced states: the top of the valence band is made up of dopant 2p states and the lowest unoccupied state is the empty gap state derived from the dopant, leading to bandgap narrowing. With the hybrid density functional theory (DFT) results available, we discuss issues with the description of these systems by DFT corrected for on-site Coulomb interactions.

Relevance: 30.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
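The latent structure view described above can be illustrated with a minimal sketch: the joint pmf of a few categorical variables assembled as a nonnegative, PARAFAC-style sum over latent classes. The dimensions, class count, and Dirichlet-drawn parameters below are arbitrary assumptions, not the collapsed Tucker construction of Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 3, 2       # three categorical variables, two latent classes
cats = [2, 3, 2]  # number of categories per variable

# Latent class (PARAFAC) parameters: class weights and per-class marginals.
lam = rng.dirichlet(np.ones(k))                          # P(class = h)
psi = [rng.dirichlet(np.ones(c), size=k) for c in cats]  # P(y_j | class = h)

# Joint pmf tensor: p(y1, y2, y3) = sum_h lam[h] * prod_j psi_j[h, y_j],
# i.e. a nonnegative rank-k factorization of the probability tensor.
pmf = np.zeros(cats)
for h in range(k):
    pmf += lam[h] * np.einsum('i,j,l->ijl', psi[0][h], psi[1][h], psi[2][h])

assert np.isclose(pmf.sum(), 1.0)  # a valid joint distribution
print(pmf.shape)                   # (2, 3, 2)
```

The number of latent classes k bounds the nonnegative rank of the tensor, which is the quantity the chapter relates to the support of a log-linear model.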

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis–Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis–Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback–Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
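The divergence bounded in Chapter 4, the Kullback–Leibler divergence between a posterior and its Gaussian approximation, has a well-known closed form when both arguments are Gaussian. The sketch below implements that standard identity (it is not the chapter's derivation itself); the example inputs are arbitrary.

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for d-dimensional Gaussians."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)       # trace term
                  + diff @ S1_inv @ diff      # mean-shift term
                  - d                          # dimension
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu, S = np.zeros(2), np.eye(2)
assert np.isclose(kl_gauss(mu, S, mu, S), 0.0)  # identical Gaussians -> KL = 0
print(kl_gauss(mu, S, mu + 1.0, 2 * S))         # ≈ 0.6931 (= ln 2)
```

Finite-sample bounds on this quantity are what certify that credible regions computed from the Gaussian approximation are close to the exact ones.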

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
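The central object of the paradigm just described, waiting times between exceedances of a high threshold, is straightforward to compute from a series. The sketch below uses an iid Gaussian series as a stand-in; real data with extremal clustering would show shortened waits within clusters.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(10_000)       # stand-in for a stationary time series
u = np.quantile(x, 0.99)              # a high threshold (99th percentile)

exceed_times = np.flatnonzero(x > u)  # time indices of threshold exceedances
waits = np.diff(exceed_times)         # waiting times between exceedances

# For an iid series, inter-exceedance times are approximately geometric
# with mean 1 / P(X > u); tail dependence would distort this distribution.
print(len(exceed_times), waits.mean())
```

In the thesis's framework it is the distribution of these waits, rather than a fixed clustering or window choice, that encodes both the strength and the temporal structure of tail dependence.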

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
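A minimal sketch of the truncated-normal (Albert–Chib) data augmentation sampler in the rare-event regime described above. The intercept-only probit model, flat prior, sample size, and iteration count are illustrative assumptions chosen to keep the example small, not the thesis's experimental setup.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n, n_success = 2000, 5              # large n, very few successes (rare events)
y = np.zeros(n)
y[:n_success] = 1                   # binary outcomes
X = np.ones((n, 1))                 # intercept-only probit design matrix

beta = np.zeros(1)
draws = []
for it in range(200):               # Albert-Chib data augmentation Gibbs
    mu = X @ beta
    # z_i | y_i, beta ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1,
    # and to (-inf, 0] if y_i = 0; bounds below are in standardized units.
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # beta | z ~ N( (X'X)^-1 X'z, (X'X)^-1 ) under a flat prior
    V = np.linalg.inv(X.T @ X)
    beta = rng.multivariate_normal(V @ X.T @ z, V)
    draws.append(beta[0])

draws = np.array(draws)
# With rare events the chain moves slowly: high lag-1 autocorrelation,
# small effective sample size, as the spectral-gap result predicts.
ac1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
print(f"posterior mean intercept ~ {draws[100:].mean():.2f}, lag-1 autocorr = {ac1:.2f}")
```

Running this for a few hundred iterations makes the slow drift of the intercept toward its posterior mode visible, which is exactly the mixing pathology the chapter quantifies.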

Relevance: 30.00%

Abstract:

There are many sociopolitical theories to help explain why governments and actors do what they do. Securitization Theory is a process-oriented theory in international relations that focuses on how an actor defines another actor as an "existential threat," and on the responses that can then be taken to address that threat. While Securitization Theory is an acceptable method for analyzing the relationships between actors in the international system, this thesis contends that the proper examination is multi-factorial, adding Role Theory to the analysis. Role Theory, another international relations theory, explains how an actor's strategies, relationships, and perception by others are based on pre-conceptualized definitions of that actor's identity; its consideration is essential in order to fully explain why an actor might respond to another in a particular way. Certain roles an actor may enact produce a rival relationship with other actors in the system, and it is those rival roles that elicit securitized responses. The possibility of a securitized response lessens when a role or a relationship between roles becomes ambiguous. There are clear points of role rivalry and role ambiguity between Hizb'allah and Iran, which has directly impacted, and continues to impact, how the United States (US) responds to these actors. Because of role ambiguity, the US has still not conceptualized an effective way to deal with Hizb'allah and Iran holistically across all their various areas of operation and in their various enacted roles. It would be overly simplistic to see Hizb'allah and Iran solely through one lens depending on which hemisphere or continent one is observing. The reality is likely more nuanced, and both Role Theory and Securitization Theory can help to understand and articulate those nuances.
By examining two case studies of Hizb'allah and Iran's enactment of various roles in the Middle East and Latin America, it becomes clear where roles cause a securitized response and where the response is less securitized due to role ambiguity. This augmented approach of combining both theories, along with supplementing the manner in which an actor, action, or role is analyzed, will produce better methods for policy-making, able to address the more ambiguous activities of Hizb'allah and Iran in these two regions.

Relevance: 30.00%

Abstract:

The conventional mechanism of fermion mass generation in the Standard Model involves Spontaneous Symmetry Breaking (SSB). In this thesis, we study an alternate mechanism for the generation of fermion masses that does not require SSB, in the context of lattice field theories. Being inherently strongly coupled, this mechanism requires a non-perturbative approach like the lattice approach.

In order to explore this mechanism, we study a simple lattice model with a four-fermion interaction that has massless fermions at weak couplings and massive fermions at strong couplings, but without any spontaneous symmetry breaking. Prior work on this type of mass generation mechanism in 4D was done long ago using either mean-field theory or Monte Carlo calculations on small lattices. In this thesis, we have developed a new computational approach that enables us to perform large-scale quantum Monte Carlo calculations to study the phase structure of this theory. In 4D, our results confirm prior results but differ in some quantitative details of the phase diagram. In contrast, in 3D, we discover a new second-order critical point using calculations on lattices up to size $60^3$; such large-scale calculations are unprecedented. The presence of the critical point implies the existence of an alternate mechanism of fermion mass generation without any SSB that could be of interest in continuum quantum field theory.

Relevance: 30.00%

Abstract:

While a great amount of attention is being given to the development of nanodevices, both through academic research and private industry, the field is still in its early stages. Progress hinges upon the development of tools and components that can precisely control the interaction between light and matter, and that can be efficiently integrated into nanodevices. Nanofibers are one of the most promising candidates for such purposes. However, in order to fully exploit their potential, a more intimate knowledge of how nanofibers interact with single neutral atoms must be gained. As we learn more about the properties of nanofiber modes and the way they interface with atoms, and as the technology develops that allows them to be prepared with more precisely known properties, they become more and more adaptable and effective. The work presented in this thesis touches on many topics, which is testament to the broad range of applications and high degree of promise that nanofibers hold. For immediate use, we need to fully grasp how they can be best implemented as sensors, filters, detectors, and switches in existing nanotechnologies. Areas of interest also include how they might be best exploited for probing atom-surface interactions, single-atom detection, and single-photon generation. Nanofiber research is also motivated by their potential integration into fundamental cold-atom quantum experiments, and the role they can play there. Combining nanofibers with existing optical and quantum technologies is a powerful strategy for advancing areas like quantum computation, quantum information processing, and quantum communication. In this thesis I present a variety of theoretical work, which explores a range of the applications listed above. The first work presented concerns the use of the evanescent fields around a nanofiber to manipulate an existing trapping geometry and therefore influence the centre-of-mass dynamics of the atom.

The second work presented explores interesting trapping geometries that can be achieved in the vicinity of a fiber in which just four modes are allowed to propagate. In a third study I explore the use of a nanofiber as a detector of small numbers of photons by calculating the rate of emission into the fiber modes when the fiber is moved along next to a regularly spaced array of atoms. Also included are some results from a work in progress, where I consider the scattered field that appears along the nanofiber axis when a small number of atoms trapped along that axis are illuminated orthogonally; some interesting preliminary results are outlined. Finally, in contrast with the rest of the thesis, I consider some interesting physics that can be done in one of the trapping geometries that can be created around the fiber: here I explore the ground states of a phase-separated two-component superfluid Bose-Einstein condensate trapped in a toroidal potential.

Relevance: 30.00%

Abstract:

Issues of body image and the ability to achieve intimacy are connected to body weight, yet remain largely unexplored and have not been evaluated by gender. The underlying purpose of this research was to determine whether avoidant attitudes toward and perceptions of one's body may hold implications for its use in intimate interactions, and whether an above-average body weight would tend to increase this avoidance. The National Health and Nutrition Examination Survey (NHANES, 1999-2002) finds that 64.5% of US adults are overweight, including 61.9% of women and 67.2% of men. The increasing prevalence of overweight and obesity in men and women shows no reverse trend, nor have prevention and treatment proven effective in the long term. The researcher gathered self-reported age, gender, height, and weight data from 55 male and 58 female subjects (a sample size determined by a prospective power analysis with a desired medium effect size, r = .30) to compute body mass index (BMI); the sample had a mean age of 21.6 years and a mean BMI of 25.6. Survey instruments consisted of two scales germane to the variables being examined: (1) Descutner and Thelen's (1991, University of Missouri) Fear-of-Intimacy Scale and (2) Rosen, Srebnik, Saltzberg, and Wendt's (1991) Body Image Avoidance Questionnaire. Results indicated that as body mass index increases, fear of intimacy increases (p<0.05), and that as body mass index increases, body image avoidance increases (p<0.05). The hypothesis that as body image avoidance increases, fear of intimacy increases was not supported, but approached significance (p<0.07). No differences in these relationships were found between gender groups. For age, the only observed relationship was a difference between scores for the two age groups [18 to 22 (group 1) and 23 to 34 (group 2)] in the relationship of body image avoidance and fear of intimacy (p<0.02).

The results suggest that the relationship of body image avoidance and fear of intimacy, as well as age, warrant consideration in light of the escalating prevalence of overweight and obesity. An integrative approach to body weight that addresses issues of body image and intimacy may prove effective in prevention and treatment.
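The BMI calculation and the kind of correlation test reported above can be sketched as follows. The scale scores below are simulated stand-ins with assumed parameters (the study used Fear-of-Intimacy and Body Image Avoidance questionnaire responses from its 113 subjects), so only the mechanics, not the numbers, reflect the study.

```python
import numpy as np
from scipy.stats import pearsonr

# BMI = weight (kg) / height (m)^2, the index computed from self-reports.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

assert round(bmi(78.0, 1.75), 1) == 25.5  # just above the overweight cutoff of 25

# Simulated BMI and Fear-of-Intimacy scores for n = 113 (parameters assumed;
# a positive slope is built in to mimic the reported direction of effect).
rng = np.random.default_rng(3)
bmi_scores = rng.normal(25.6, 4.0, size=113)
foi_scores = 60 + 1.5 * bmi_scores + rng.normal(0, 10, size=113)

r, p = pearsonr(bmi_scores, foi_scores)
print(f"r = {r:.2f}, p = {p:.4f}")  # p < 0.05 -> significant association
```

The same call with the Body Image Avoidance scores would reproduce the study's second reported relationship.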

Relevance: 30.00%

Abstract:

Natural and man-made disasters have gained attention at all levels of policy-making in recent years. Emergency management tasks are inherently complex and unpredictable, and often require coordination among multiple organizations across different levels and locations. Effectively managing various knowledge areas and the organizations involved has become a critical emergency management success factor. However, there is a general lack of understanding about how to describe and assess the complex nature of emergency management tasks and how knowledge integration can help managers improve emergency management task performance. The purpose of this exploratory research was, first, to understand how emergency management operations are impacted by tasks that are complex and inter-organizational and, second, to investigate how knowledge integration as a particular knowledge management strategy can improve the efficiency and effectiveness of emergency tasks. Three types of specific knowledge were considered: context-specific, technology-specific, and context-and-technology-specific. The research setting was the Miami-Dade Emergency Operations Center (EOC), and the study was based on survey responses from participants in past EOC activations about their emergency tasks and knowledge areas. The data included task attributes related to complexity, knowledge area, knowledge integration, specificity of knowledge, and task performance. The data were analyzed using multiple linear regressions and path analyses to (1) examine the relationships between task complexity, knowledge integration, and performance, (2) examine the moderating effects of each type of specific knowledge on the relationship between task complexity and performance, and (3) assess the mediating role of knowledge integration. Consistent with theory-based propositions, the results indicated that overall component complexity and interactive complexity tend to have a negative effect on task performance.

Surprisingly, however, procedural rigidity tended to have a positive effect on performance in emergency management tasks. Also as expected, knowledge integration had a positive relationship with task performance. Interestingly, the moderating effects of each type of specific knowledge on the relationship between task complexity and performance varied, and the extent of mediation by knowledge integration depended on the dimension of task complexity.
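A moderated regression of the kind used in the analysis, where the effect of task complexity on performance depends on a specific-knowledge score, can be sketched with ordinary least squares and an interaction term. The data and coefficients below are simulated assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
complexity = rng.normal(size=n)   # task complexity (standardized, simulated)
knowledge = rng.normal(size=n)    # specific-knowledge score (standardized)

# Built-in moderation: complexity hurts performance less when knowledge is high.
performance = (-0.5 * complexity + 0.3 * knowledge
               + 0.4 * complexity * knowledge
               + rng.normal(0, 0.5, size=n))

# OLS with an interaction column: performance ~ complexity * knowledge.
X = np.column_stack([np.ones(n), complexity, knowledge, complexity * knowledge])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(np.round(coef, 2))  # roughly recovers [0, -0.5, 0.3, 0.4]
```

A significant interaction coefficient is what a moderating effect looks like in this framework; path analysis then decomposes how much of the complexity effect is carried through knowledge integration.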

Relevance: 30.00%

Abstract:

Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation, and audience fragmentation. This has favoured the development of non-conventional advertising formats, and this study provides empirical evidence for their theoretical development. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that the non-conventional advertising formats are more effective at the cognitive level, generating higher levels of both unaided and aided recall than the spot in all the formats analyzed.

Relevance: 30.00%

Abstract:

The proliferation of weapons of mass destruction (WMD), i.e., nuclear, biological, and chemical (NBC) weapons, is one of the main security challenges facing the international community today. However, the new Global Security Strategy of 2016 raises the question of WMD non-proliferation only as an incidental matter, failing to address directly what is a fundamental threat to regional and global security. This is a clear step backwards for European common security.