959 results for Daniels, Norm
Abstract:
The purpose of this article is to investigate in what ways multi-level actor cooperation advances national and local implementation processes of human rights norms in weak-state contexts. Examining the cases of women’s rights in Bosnia and Herzegovina and children’s rights in Bangladesh, we comparatively point to some advantages and disadvantages that cooperative relations between international organisations, national governments and local NGOs can entail. Whereas these multi-level actor constellations (MACs) usually initiate norm implementation processes reliably and compensate for governmental deficits, they are not always sustainable in the long run. If international organisations withdraw support from temporary missions or policy projects, local NGOs are not able to perpetuate implementation activities unless state capacities have been strengthened by MACs. Our aim is to highlight the functions of local agency within multi-level cooperation and to critically raise sustainability issues in human rights implementation, thereby supplementing norm research in International Relations.
Abstract:
Analysis of risk measures associated with price-series movements, and their prediction, is of strategic importance in financial markets as well as to policy makers, in particular for short- and long-term planning and for setting economic growth targets. For example, oil-price risk management focuses primarily on when and how an organization can best prevent costly exposure to price risk. Value-at-Risk (VaR) is the commonly practiced instrument to measure risk and is evaluated by analyzing the negative/positive tail of the probability distribution of the returns (profit or loss). In modeling applications, least-squares estimation (LSE)-based linear regression models are often employed for modeling and analyzing correlated data. These linear models are optimal and perform relatively well under conditions such as errors following a normal or approximately normal distribution, freedom from large outliers, and satisfaction of the Gauss-Markov assumptions. However, in practical situations the LSE-based linear regression models often fail to provide optimal results, for instance in non-Gaussian settings, especially when the errors follow fat-tailed distributions and the error terms may not possess a finite variance. This is the situation in risk analysis, which involves analyzing tail distributions. Thus, applications of the LSE-based regression models may be questioned for appropriateness and may have limited applicability. We have carried out a risk analysis of Iranian crude oil price data based on Lp-norm regression models and have noted that the LSE-based models do not always perform best. We discuss results from the L1-, L2- and L∞-norm based linear regression models. ACM Computing Classification System (1998): B.1.2, F.1.3, F.2.3, G.3, J.2.
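As a concrete illustration of the contrast this abstract draws, the following sketch compares an L2 (least-squares) fit with an L1 (least-absolute-deviations) fit on synthetic fat-tailed data. The data-generating model and all parameter values are invented for illustration and are not taken from the paper; the L1 fit uses a standard iteratively-reweighted-least-squares loop rather than the paper's own estimation procedure.

```python
import numpy as np

def fit_l2(X, y):
    """Ordinary least-squares (L2-norm) regression fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fit_l1(X, y, iters=50, eps=1e-8):
    """L1-norm (least absolute deviations) fit via iteratively
    reweighted least squares: large residuals receive small weights,
    so fat-tailed errors influence the fit far less than in OLS."""
    beta = fit_l2(X, y)                       # warm start from OLS
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# Synthetic "returns" with fat-tailed (Student-t, df=2) noise.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
X = np.column_stack([np.ones_like(x), x])     # intercept + slope
y = 0.5 + 1.5 * x + rng.standard_t(df=2, size=500)

print("true coefficients:", [0.5, 1.5])
print("L2 (OLS) estimate:", fit_l2(X, y))
print("L1 (LAD) estimate:", fit_l1(X, y))
```

On runs like this the L1 estimate typically stays closer to the true coefficients, which is the behaviour the abstract attributes to Lp-norm alternatives when tails are heavy.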
Abstract:
2000 Mathematics Subject Classification: 53C42, 53C55.
Abstract:
2000 Mathematics Subject Classification: 47A10, 47A12, 47A30, 47B10, 47B20, 47B37, 47B47, 47D50.
Abstract:
In this paper, we first present a simple but effective L1-norm-based two-dimensional principal component analysis (2DPCA). The traditional L2-norm-based least-squares criterion is sensitive to outliers, while the newly proposed L1-norm 2DPCA is robust. Experimental results demonstrate its advantages. © 2006 IEEE.
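The listing gives no algorithmic detail, but L1-norm PCA methods of this family are commonly built on a greedy sign-flipping iteration (as in Kwak-style PCA-L1). The sketch below shows that one-component iteration on vector data with invented toy data; it is a generic baseline, not the paper's 2DPCA algorithm, which applies the idea to image matrices directly.

```python
import numpy as np

def l1_pca_component(X, iters=200, seed=0):
    """One L1-norm principal component via the greedy sign-flipping
    iteration: locally maximizes sum_i |w . x_i| over unit vectors w,
    for centered sample rows x_i."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                      # break ties away from zero
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w) or np.allclose(w_new, -w):
            break                            # converged (up to sign)
        w = w_new
    return w

# Centered toy data with decreasing column variances plus gross outliers.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10)) * np.linspace(3.0, 0.5, 10)
X[:5] += 40.0                                # five outlying samples
X -= X.mean(axis=0)

w_l1 = l1_pca_component(X)
w_l2 = np.linalg.svd(X, full_matrices=False)[2][0]   # L2 (SVD) direction
print("|overlap| between L1 and L2 directions:", abs(w_l1 @ w_l2))
```

The L2 direction is dragged toward the outliers, while the L1 direction stays closer to the dominant clean axis, illustrating the robustness claim.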
Abstract:
Tensor analysis plays an important role in modern image and vision computing problems. Most existing tensor analysis approaches are based on the Frobenius norm, which makes them sensitive to outliers. In this paper, we propose L1-norm-based tensor analysis (TPCA-L1), which is robust to outliers. Experimental results on face and other datasets demonstrate the advantages of the proposed approach. © 2006 IEEE.
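To make the Frobenius-sensitivity point concrete, here is a tiny synthetic demo (invented data, not from the paper): because the SVD minimizes squared (Frobenius) error and squaring amplifies gross errors, a single corrupted entry can swing the Frobenius-optimal leading direction.

```python
import numpy as np

rng = np.random.default_rng(0)
# Near rank-1 clean data matrix plus one grossly corrupted entry.
u = rng.standard_normal(30)
v = rng.standard_normal(30)
A = np.outer(u, v) + 0.1 * rng.standard_normal((30, 30))
B = A.copy()
B[0, 0] += 500.0                             # single outlier entry

# Leading left singular vectors = Frobenius-optimal rank-1 directions.
u_clean = np.linalg.svd(A)[0][:, 0]
u_dirty = np.linalg.svd(B)[0][:, 0]
print("|<u_clean, u_dirty>|:", abs(u_clean @ u_dirty))  # well below 1
```

An L1-type criterion, which weights that corrupted entry linearly rather than quadratically, is far less affected; that is the motivation behind TPCA-L1.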
Abstract:
This study explores how great powers not allied with the United States formulate their grand strategies in a unipolar international system. Specifically, it analyzes the strategies China and Russia have developed to deal with U.S. hegemony by examining how Moscow and Beijing have responded to American intervention in Central Asia. The study argues that China and Russia have adopted a soft balancing strategy to indirectly balance the United States at the regional level. This strategy uses normative capabilities such as soft power, alternative institutions and regionalization to offset the overwhelming material hardware of the hegemon. The theoretical and methodological approach of this dissertation is neoclassical realism. Chinese and Russian balancing efforts against the United States are based on their domestic dynamics as well as systemic constraints. Neoclassical realism provides a bridge between the internal characteristics of states and the environment in which those states are situated. Because China and Russia do not have the hardware (military or economic power) to directly challenge the United States, they must resort to their software (soft power and norms) to indirectly counter American preferences and set the agenda to obtain their own interests. Neoclassical realism maintains that soft power is an extension of hard power and a reflection of the internal makeup of states. The dissertation uses the heuristic case study method to demonstrate the efficacy of soft balancing. Such case studies help to facilitate theory construction and are not necessarily the demonstrable final say on how states behave in given contexts. Nevertheless, the study finds that China and Russia have increased their soft power to counterbalance the United States in certain regions of the world, Central Asia in particular. The conclusion explains how soft balancing can be integrated into the overall balance-of-power framework to explain Chinese and Russian responses to U.S. hegemony. It also suggests that an analysis of norms and soft power should be integrated into the study of grand strategy, including both foreign policy and military doctrine.
Abstract:
In the new health paradigm, the connotation of health has extended beyond the measures of morbidity and mortality to include wellness and quality of life. Comprehensive assessments of health go beyond traditional biological indicators to include measures of physical and mental health status, social role-functioning, and general health perceptions. To meet these challenges, tools for assessment and outcome evaluation are being designed to collect information about functioning and well-being from the individual's point of view. The purpose of this study was to profile the physical and mental health status of a sample of county government employees against U.S. population norms. A second purpose of the study was to determine whether significant relationships existed between respondent characteristics, personal health practices, lifestyle and other health variables and the health status scores, and to demonstrate how the tools and methods used in this investigation can be used to guide program development and facilitate monitoring of health promotion initiatives. The SF-12 Health Survey (Ware, Kosinski, & Keller, 1995), a validated measure of health status, was administered to a convenience sample of 450 employees attending one of nine health fairs at an urban worksite. The instrument has been utilized nationally, which enabled a comparative analysis of the findings of this study with national results. Results from this study demonstrated that several respondent characteristics and personal health practices were associated with a greater percentage of physical and/or mental scale scores that were significantly "worse" or significantly "better" than the general population. Respondent characteristics that were significantly related to the SF-12 physical and/or mental health scale scores were gender, age, education, ethnicity, and income status. Personal health practices that were significantly related to SF-12 physical and/or mental scale scores were frequency of vigorous exercise, presence of chronic illness, being at one's prescribed height and weight, eating breakfast, smoking and drinking status. This study provides an illustration of the methods used to analyze and interpret SF-12 Health Survey data, using norm-based interpretation guidelines which are useful for purposes of program development and collecting information on health at the community level.
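The "norm-based interpretation" this abstract mentions re-expresses raw scale scores as T-scores relative to general-population norms (mean 50, SD 10). The snippet below is only a schematic of that transformation; the reference mean and SD used here are hypothetical placeholders, not the instrument's actual published SF-12 norms.

```python
# Schematic norm-based (T-score) transformation. The reference values
# below are hypothetical placeholders, NOT actual SF-12 population norms.
POP_MEAN, POP_SD = 47.3, 9.8

def t_score(raw):
    """Re-express a raw scale score on a mean-50, SD-10 normed scale."""
    return 50.0 + 10.0 * (raw - POP_MEAN) / POP_SD

for raw in (35.0, 47.3, 58.0):
    print(f"raw {raw:5.1f} -> T {t_score(raw):5.1f}")  # T < 50: "worse" than the norm
```

Scoring samples this way is what allows a worksite sample to be read directly as "worse" or "better" than the general population, as reported in the study.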
Abstract:
27 pages, 6 figures
Abstract:
1960-1961 Miss Lincoln
Abstract:
Spectral unmixing (SU) is a technique to characterize mixed pixels in hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present at each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model through two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known and the main problem is to find the minimum number of endmembers under a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. Our main study in the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods via such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers in comparison with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages such as enforcing the nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentrated energy of SSoM in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results illustrate considerable enhancements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
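The thesis's own $\ell_0$ approximations are not reproduced in this listing. As a point of reference for the semiblind scenario, here is a minimal sketch of the standard convex $\ell_1$ relaxation solved with projected ISTA, with a random synthetic library standing in for a real spectral dictionary; everything below is an illustrative baseline, not the thesis's method.

```python
import numpy as np

def unmix_ista(A, y, lam=1e-2, iters=2000):
    """Sparse nonnegative abundances for one pixel y given library A,
    via projected ISTA on  0.5*||Ax - y||^2 + lam*||x||_1,  x >= 0."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))     # gradient step on the data term
        x = np.maximum(z - lam * step, 0.0)    # soft-threshold merged with x >= 0
    return x

# Synthetic stand-in for a spectral library and a 3-material mixed pixel.
rng = np.random.default_rng(0)
A = rng.random((224, 60))                      # bands x library endmembers
x_true = np.zeros(60)
x_true[[4, 17, 42]] = [0.5, 0.3, 0.2]          # sparse fractional abundances
y = A @ x_true + 0.001 * rng.standard_normal(224)

x_hat = unmix_ista(A, y)
print("support found:", np.flatnonzero(x_hat > 0.01))
```

Tighter nonconvex surrogates of the $\ell_0$ term, of the kind the thesis studies, aim to improve on exactly this kind of baseline in reconstruction error and support recovery.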
Abstract:
Review of: Norm Friesen: The Place of the Classroom and the Space of the Screen. Relational Pedagogy and Internet Technology. New York, Bern etc.: Lang 2011 (183 pp.; ISBN 978-1-4331-0959-1)
Abstract:
We develop energy-norm a posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive a reliable and efficient indicator for the errors measured in terms of the natural energy norm. The ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
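The abstract does not reproduce the indicator itself. The LaTeX fragment below sketches the generic structure that hp-version energy-norm indicators of this kind take for a model Poisson problem (an element residual plus hp-weighted face-jump terms); the exact weights, norms, and constants must be taken from the paper, and $[\cdot]$ denotes the inter-element jump.

```latex
% Generic shape of an hp-version energy-norm error indicator for a DG
% discretization of -\Delta u = f (schematic; not the paper's exact
% definition). [.] denotes the jump across element faces.
\[
  \eta_K^2 \;=\;
    \frac{h_K^2}{p_K^2}\,\bigl\| f + \Delta u_h \bigr\|_{L^2(K)}^2
    \;+\; \frac{h_K}{p_K}\,\bigl\| [\nabla u_h] \bigr\|_{L^2(\partial K\setminus\partial\Omega)}^2
    \;+\; \frac{p_K^3}{h_K}\,\bigl\| [u_h] \bigr\|_{L^2(\partial K)}^2 ,
\]
with reliability and (local) efficiency bounds of the form
\[
  \|u-u_h\|_E^2 \;\lesssim\; \sum_{K\in\mathcal{T}_h}\eta_K^2 ,
  \qquad
  \eta_K^2 \;\lesssim\; C(p_K)\,\bigl(\|u-u_h\|_{E(\omega_K)}^2 + \mathrm{osc}_K^2\bigr),
\]
% where the ratio of the hidden constants is independent of h_K and
% depends only weakly on p_K, as the abstract states.
```

Indicators of this shape are what drive the hp-adaptive refinement loop the abstract describes: elements with large $\eta_K$ are refined in $h$ or enriched in $p$.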