Abstract:
The observation-error covariance matrix used in data assimilation contains contributions from instrument errors, representativity errors and errors introduced by the approximated observation operator. Forward model errors arise when the observation operator does not correctly model the observations or when observations can resolve spatial scales that the model cannot. Previous work to estimate the observation-error covariance matrix for particular observing instruments has shown that it contains significant correlations. In particular, correlations for humidity data are more significant than those for temperature. However, it is not known what proportion of these correlations can be attributed to representativity errors. In this article we apply an existing method for calculating representativity error, previously applied to an idealised system, to NWP data. We calculate horizontal errors of representativity for temperature and humidity using data from the Met Office high-resolution UK variable resolution model. Our results show that errors of representativity are correlated and are more significant for specific humidity than for temperature. We also find that representativity error varies with height. This suggests that the assimilation may be improved if these errors are explicitly included in the data assimilation scheme. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
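For reference, the additive error decomposition that underlies an observation-error covariance of this kind is usually written as follows (generic data assimilation notation, not the article's own):

```latex
% Generic observation-error decomposition (standard data assimilation notation,
% not the article's own): y is the observation vector, x^t the true state,
% H the (approximate) observation operator, e_I the instrument error and
% e_H the forward-model/representativity error.
\[
  \mathbf{y} = H(\mathbf{x}^{t}) + \boldsymbol{\varepsilon}_{I} + \boldsymbol{\varepsilon}_{H},
  \qquad
  \mathbf{R} = \mathbb{E}\!\left[
      (\boldsymbol{\varepsilon}_{I} + \boldsymbol{\varepsilon}_{H})
      (\boldsymbol{\varepsilon}_{I} + \boldsymbol{\varepsilon}_{H})^{\mathsf{T}}
  \right]
\]
% The off-diagonal terms of R are the correlations discussed above; the part
% contributed by e_H is the representativity error the article estimates.
```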
The Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions
Abstract:
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gas (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than that of its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG emissions. Direct conversion from the standalone package to the cloud provision platform can now take energy and GHG emissions into account at the software development and cloud service design stage using the methods described in this research.
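A minimal sketch of the kind of stage-by-stage energy and emissions accounting described above; the function names and all numbers below are placeholders rather than values from the study:

```python
# Illustrative sketch only: a three-stage energy model of the kind described
# (data centre + network + end-user device). All figures are placeholders,
# not values from the study.

def task_energy_wh(dc_wh: float, network_wh: float, device_wh: float) -> float:
    """Total energy per task, summed over the three data-transmission stages."""
    return dc_wh + network_wh + device_wh

def emissions_g(energy_wh: float, grid_intensity_g_per_wh: float = 0.5) -> float:
    """Convert energy to GHG emissions using an assumed grid carbon intensity."""
    return energy_wh * grid_intensity_g_per_wh

# Hypothetical comparison of a cloud task against a standalone (local) task:
cloud = task_energy_wh(dc_wh=0.8, network_wh=0.6, device_wh=2.1)
local = task_energy_wh(dc_wh=0.0, network_wh=0.0, device_wh=3.9)
print(f"cloud: {emissions_g(cloud):.2f} gCO2e, local: {emissions_g(local):.2f} gCO2e")
```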
Abstract:
What are the main causes of international terrorism? Despite the meticulous examination of various candidate explanations, existing estimates still diverge in sign, size, and significance. This article puts forward a novel explanation and supporting evidence. We argue that domestic political instability provides the learning environment needed to successfully execute international terror attacks. Using a yearly panel of 123 countries over 1973–2003, we find that the occurrence of civil wars increases fatalities and the number of international terrorist acts by 45%. These results hold for alternative indicators of political instability, estimators, subsamples, subperiods, and accounting for competing explanations.
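The estimation described above is a country-year panel design; a minimal two-way fixed-effects sketch of that kind of regression is given below. The variable names and toy data are hypothetical, and the authors' actual estimators, controls and count models may differ.

```python
# Hedged sketch: two-way fixed-effects panel regression of terrorism counts on
# civil-war incidence. Column names and the toy data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "country":   ["A", "A", "B", "B", "C", "C"],
    "year":      [1990, 1991, 1990, 1991, 1990, 1991],
    "attacks":   [2, 5, 0, 1, 3, 7],          # international terrorist incidents
    "civil_war": [0, 1, 0, 0, 1, 1],          # domestic political instability proxy
})

# Country and year fixed effects absorb time-invariant and common-shock confounders.
model = smf.ols("attacks ~ civil_war + C(country) + C(year)", data=df).fit()
print(model.params["civil_war"])
```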
Abstract:
The metabolic syndrome may have its origins in thriftiness, insulin resistance and one of the most ancient of all signalling systems, redox. Thriftiness results from an evolutionarily driven propensity to minimise energy expenditure. This has to be balanced with the need to resist the oxidative stress from cellular signalling and pathogen resistance, giving rise to something we call 'redox-thriftiness'. This is based on the notion that mitochondria may be able both to amplify membrane-derived redox growth signals and to negatively regulate them, resulting in an increased ATP/ROS ratio. We suggest that 'redox-thriftiness' leads to insulin resistance, which both protects the individual cell from excessive growth/inflammatory stress and ensures that energy is channelled to the brain, the immune system, and storage. We also suggest that fine tuning of redox-thriftiness is achieved by hormetic (mild stress) signals that stimulate mitochondrial biogenesis and resistance to oxidative stress, which improves metabolic flexibility. However, in a non-hormetic environment with excessive calories, the protective nature of this system may lead to escalating insulin resistance and rising oxidative stress due to metabolic inflexibility and mitochondrial overload. Thus, the mitochondrially associated resistance to oxidative stress (and metabolic flexibility) may determine insulin resistance. Genetically and environmentally determined mitochondrial function may define a 'tipping point' at which protective insulin resistance tips over into inflammatory insulin resistance. Many hormetic factors may induce mild mitochondrial stress and biogenesis, including exercise, fasting, temperature extremes, unsaturated fats, polyphenols, alcohol, and even metformin and statins. Without hormesis, a proposed redox-thriftiness tipping point might lead to a feed-forward insulin resistance cycle in the presence of excess calories. We therefore suggest that, as oxidative stress determines functional longevity, a more descriptive term for the metabolic syndrome is the 'lifestyle-induced metabolic inflexibility and accelerated ageing syndrome'. Ultimately, thriftiness is good for us as long as we have hormetic stimuli; unfortunately, mankind is attempting to remove all hormetic (stressful) stimuli from its environment.
Abstract:
Coronal mass ejections (CMEs) can be continuously tracked through a large portion of the inner heliosphere by direct imaging in visible and radio wavebands. White light (WL) signatures of solar wind transients, such as CMEs, result from Thomson scattering of sunlight by free electrons and therefore depend on both viewing geometry and electron density. The Faraday rotation (FR) of radio waves from extragalactic pulsars and quasars, which arises due to the presence of such solar wind features, depends on the line-of-sight magnetic field component B_∥ and the electron density. To understand coordinated WL and FR observations of CMEs, we perform forward magnetohydrodynamic modeling of an Earth-directed shock and synthesize the signatures that would be remotely sensed at a number of widely distributed vantage points in the inner heliosphere. Removal of the background solar wind contribution reveals the shock-associated enhancements in WL and FR. While the efficiency of Thomson scattering depends on scattering angle, WL radiance I decreases with heliocentric distance r roughly as I ∝ r⁻³. The sheath region downstream of the Earth-directed shock is well viewed from the L4 and L5 Lagrangian points, demonstrating the benefits of these points in terms of space weather forecasting. The spatial position of the main scattering site, r_sheath, and the mass of plasma at that position, M_sheath, can be inferred from the polarization of the shock-associated enhancement in WL radiance. From the FR measurements, the local B_∥,sheath at r_sheath can then be estimated. Simultaneous observations in polarized WL and FR can not only be used to detect CMEs, but also to diagnose their plasma and magnetic field properties.
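For reference, the textbook relations behind these two remote-sensing diagnostics (not the article's own equations) are:

```latex
% Standard background relations (textbook forms, not the article's equations):
% the Thomson-scattered white-light radiance of a feature of fixed electron
% content falls off roughly as the inverse cube of heliocentric distance,
\[
  I \;\propto\; r^{-3},
\]
% while the Faraday rotation of a linearly polarised radio signal is set by the
% rotation measure, an integral of electron density times the line-of-sight field:
\[
  \Delta\chi = \mathrm{RM}\,\lambda^{2},
  \qquad
  \mathrm{RM} = \frac{e^{3}}{2\pi m_{e}^{2} c^{4}} \int n_{e}\, B_{\parallel}\, \mathrm{d}l .
\]
```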
Abstract:
During the last 30 years, significant debate has taken place regarding multilevel research. However, the extent to which multilevel research is overtly practiced remains to be examined. This article analyzes 10 years of organizational research within a multilevel framework (from 2001 to 2011). The goals of this article are (a) to understand what has been done, during this decade, in the field of organizational multilevel research and (b) to suggest new arenas of research for the next decade. A total of 132 articles were selected for analysis through ISI Web of Knowledge. Through a broad-based literature review, results suggest that there is a balance between the number of empirical and conceptual papers regarding multilevel research, with most studies addressing the cross-level dynamics between teams and individuals. In addition, this study also found that time still has little presence in organizational multilevel research. Implications, limitations, and future directions are addressed at the end.

Organizations are made of interacting layers. That is, between layers (such as divisions, departments, teams, and individuals) there is often some degree of interdependence that leads to bottom-up and top-down influence mechanisms. Teams and organizations are contexts for the development of individual cognitions, attitudes, and behaviors (top-down effects; Kozlowski & Klein, 2000). Conversely, individual cognitions, attitudes, and behaviors can also influence the functioning and outcomes of teams and organizations (bottom-up effects; Arrow, McGrath, & Berdahl, 2000). One example is that an organization's reward system may influence employees' intention to quit and the presence or absence of extra-role behaviors. At the same time, many studies have shown the importance of bottom-up emergent processes that yield higher-level phenomena (Bashshur, Hernández, & González-Romá, 2011; Katz-Navon & Erez, 2005; Marques-Quinteiro, Curral, Passos, & Lewis, in press). For example, the affectivity of individual employees may influence their team's interactions and outcomes (Costa, Passos, & Bakker, 2012). Several authors agree that organizations must be understood as multilevel systems, meaning that adopting a multilevel perspective is fundamental to understanding real-world phenomena (Kozlowski & Klein, 2000). However, whether this agreement is reflected in the practice of multilevel research seems to be less clear. In fact, how much is known about the quantity and quality of multilevel research done in the last decade? The aim of this study is to compare what has been proposed theoretically, concerning the importance of multilevel research, with what has really been empirically studied and published. First, this article outlines a review of multilevel theory, followed by what has been theoretically “put forward” by researchers. Second, this article presents what has really been “practiced”, based on the results of a review of multilevel studies published from 2001 to 2011 in business and management journals. Finally, some barriers and challenges to true multilevel research are suggested. This study contributes to multilevel research as it describes the last 10 years of research. It quantitatively depicts the type of articles being written, and where we can find the majority of the publications on empirical and conceptual work related to multilevel thinking.
Abstract:
Research into the dark side of customer management and marketing is steadily growing. The marketing landscape today is dominated by suspicion and distrust as a result of practices that include hidden fees, deception and information mishandling. In such a pessimistic economy, marketers must reconceptualise the notion of fairness in marketing and customer management, so that the progress of sophisticated customisation schemes and advancements in marketing can flourish while avoiding further control and imposed regulation. In this article, emerging research is drawn on to suggest that existing quality measures of marketing activities, including service, relationships and experiences, may not be comprehensive in measuring what is relevant in a socially and ethically oriented marketing landscape, and on that basis do not measure the fairness that is truly important in such an economy. The paper puts forward the concept of Fairness Quality (FAIRQUAL), which both includes and extends existing thinking behind relationship building, experience creation and other types of customer management practices that are believed to predict consumer intentions. It is proposed that a fairness quality measure will aid marketers in this challenging landscape and economy.
Abstract:
An efficient two-level model identification method aiming at maximising a model's generalisation capability is proposed for a large class of linear-in-the-parameters models from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters in the elastic net are optimised using a particle swarm optimisation (PSO) algorithm at the upper level by minimising the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. Firstly, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates the automatic model structure selection process with no need for a predetermined error tolerance to terminate the forward selection process. Secondly, it is shown that the LOOMSE based on the resultant ENOFR models can be analytically computed without actually splitting the data set, and the associated computational cost is small due to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resorting to any other validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
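A heavily simplified sketch of the two-level idea, assuming scikit-learn: the outer level searches the two elastic net regularisation parameters against a leave-one-out score, while the inner level fits the penalised model. A plain grid search stands in for PSO, an off-the-shelf ElasticNet stands in for the ENOFR algorithm, and explicit LOO cross-validation stands in for the paper's analytic LOOMSE.

```python
# Hedged sketch of the two-level idea only (not the paper's algorithm).
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=60)   # toy data

def loo_mse(lam, ratio):
    """Leave-one-out mean square error for a given (penalty strength, l1 ratio)."""
    model = ElasticNet(alpha=lam, l1_ratio=ratio, max_iter=10_000)
    scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
    return -scores.mean()

# Outer level: search the two regularisation parameters (PSO in the paper).
best = min((loo_mse(lam, ratio), lam, ratio)
           for lam in [1e-3, 1e-2, 1e-1]
           for ratio in [0.1, 0.5, 0.9])
print("best LOOMSE, alpha, l1_ratio:", best)
```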
Abstract:
In this EUDO CITIZENSHIP Forum Debate, several authors consider the interrelations between eligibility criteria for participation in an independence referendum (one that may result in the creation of a new independent state) and the determination of the putative citizenship ab initio (on day one) of such a state. The kick-off contribution argues that the franchise for an independence referendum and the initial determination of the citizenry should resemble each other, critically appraising the incongruence between the franchise for the 18 September 2014 Scottish independence referendum and the blueprint for Scottish citizenship ab initio put forward by the Scottish Government in its 'Scotland's Future' White Paper. Contributors to this debate come from divergent disciplines (law, political science, sociology, philosophy). They reflect on and contest the above claims, both generally and in relation to regional settings including (in addition to Scotland) Catalonia/Spain, Flanders/Belgium, Quebec/Canada, Post-Yugoslavia and Puerto Rico/USA.
Abstract:
A parameterization of mesoscale eddies in coarse-resolution ocean general circulation models (GCMs) is formulated and implemented using a residual-mean formalism. In that framework, mean buoyancy is advected by the residual velocity (the sum of the Eulerian and eddy-induced velocities) and modified by a residual flux which accounts for the diabatic effects of mesoscale eddies. The residual velocity is obtained by stepping forward a residual-mean momentum equation in which eddy stresses appear as forcing terms. Study of the spatial distribution of eddy stresses, derived by using them as control parameters to “fit” the residual-mean model to observations, supports the idea that eddy stresses can be likened to a vertical down-gradient flux of momentum with a coefficient that is constant in the vertical. The residual eddy flux is set to zero in the ocean interior, where mesoscale eddies are assumed to be quasi-adiabatic, but is parameterized by a horizontal down-gradient diffusivity near the surface, where eddies develop a diabatic component as they stir properties horizontally across steep isopycnals. The residual-mean model is implemented and tested in the MIT general circulation model. It is shown that the resulting model (1) has a climatology that is superior to that obtained using the Gent and McWilliams parameterization scheme with a spatially uniform diffusivity and (2) allows one to significantly reduce the (spurious) horizontal viscosity used in coarse-resolution GCMs.
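Schematically, and in generic notation rather than the article's exact equations, the residual-mean framework described above can be summarised as:

```latex
% Schematic of the residual-mean framework in generic notation
% (not the article's exact equations): the residual (transport) velocity is
% the sum of the Eulerian-mean and eddy-induced velocities,
\[
  \mathbf{u}_{\mathrm{res}} = \bar{\mathbf{u}} + \mathbf{u}^{*},
\]
% mean buoyancy is advected by u_res, and the eddy forcing enters the momentum
% equation as the vertical divergence of an eddy stress that is closed
% down-gradient with a coefficient nu_e taken constant in the vertical:
\[
  \frac{\partial \mathbf{u}_{\mathrm{res}}}{\partial t} + \dots
  = \dots + \frac{\partial \boldsymbol{\tau}_{e}}{\partial z},
  \qquad
  \boldsymbol{\tau}_{e} \approx \nu_{e}\,\frac{\partial \bar{\mathbf{u}}}{\partial z}.
\]
```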
Abstract:
Magic and Medieval Society presents a thematic approach to the topic of magic and sorcery in western Europe between the eleventh and the fifteenth centuries. It aims to provide readers with the conceptual and documentary tools to reach informed conclusions as to the existence, nature, importance and uses of magic in medieval society. Contrary to some previous approaches, this book argues that magic was inextricably connected to other areas of cultural practice and was found across medieval society: at medieval courts; at universities; and within the Church itself. The book also puts forward the argument that the witch craze was not a medieval phenomenon but rather the product of the Renaissance and the Reformation, and demonstrates how the components for the early-modern persecution of witches were put into place.
Abstract:
This study investigates the financial effects of additions to and deletions from the most well-known social stock index: the MSCI KLD 400. Our study makes use of the unique setting that index reconstitution provides and allows us to bypass possible issues of endogeneity that commonly plague empirical studies of the link between corporate social and financial performance. By examining not only short-term returns but also trading activity, earnings per share, and long-term performance of stocks that are involved in these events, we bring forward evidence of a ‘social index effect’ where unethical transgressions are penalized more heavily than responsibility is rewarded. We find that the addition of a stock to the index does not lead to material changes in its market price, whereas deletions are accompanied by negative cumulative abnormal returns. Trading volumes for deleted stocks are significantly increased on the event date, while the operational performances of the respective firms deteriorate after their deletion from the social index.
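The short-term return analysis described above rests on standard event-study mechanics; a minimal sketch of those mechanics, with synthetic data and not necessarily the paper's exact specification, is:

```python
# Hedged sketch of standard event-study mechanics (market-model abnormal returns
# and their cumulation), of the kind used to study index additions/deletions.
# The data below are synthetic; the paper's exact specification may differ.
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, size=280)                 # daily market returns
stock  = 0.0002 + 1.1 * market + rng.normal(0, 0.01, 280)   # one stock's returns

est, event = slice(0, 250), slice(250, 261)                 # estimation / event window

# Market model fitted over the estimation window: r_it = a_i + b_i * r_mt + e_it
beta, alpha = np.polyfit(market[est], stock[est], 1)

abnormal = stock[event] - (alpha + beta * market[event])    # abnormal returns
car = abnormal.cumsum()[-1]                                 # cumulative abnormal return
print(f"CAR over the event window: {car:.4%}")
```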
Abstract:
Scope: Epidemiological and clinical studies have demonstrated that the consumption of red, haem-rich meat may contribute to the risk of colorectal cancer. Two hypotheses have been put forward to explain this causal relationship, i.e. N-nitroso compound (NOC) formation and lipid peroxidation (LPO). Methods and Results: In this study, the NOC-derived DNA adduct O6-carboxymethylguanine (O6-CMG) and the LPO product malondialdehyde (MDA) were measured in individual in vitro gastrointestinal digestions of meat types varying in haem content (beef, pork, chicken). While MDA formation peaked during the in vitro small intestinal digestion, alkylation and concomitant DNA adduct formation were observed in seven (out of 15) individual colonic digestions using separate faecal inocula. Of those, two haem-rich meat digestions demonstrated significantly higher O6-CMG formation (p < 0.05). MDA concentrations proved to be positively correlated (p < 0.0004) with the haem content of the digested meat. The addition of myoglobin, a haem-containing protein, to the digestive simulation showed a dose–response association with O6-CMG (p = 0.004) and MDA (p = 0.008) formation. Conclusion: The results suggest that haem iron is involved in both the LPO and NOC pathways during meat digestion. Moreover, the results unambiguously demonstrate that DNA adduct formation is very prone to inter-individual variation, suggesting a person-dependent susceptibility to colorectal cancer development following haem-rich meat consumption.
Abstract:
Recently, the original benchmarking methodology of the Sustainable Value approach has become the subject of serious debate. While Kuosmanen and Kuosmanen (2009b) critically question its validity by introducing productive efficiency theory, Figge and Hahn (2009) put forward that the implementation of productive efficiency theory severely conflicts with the original financial economics perspective of the Sustainable Value approach. We argue that the debate is confusing because the original Sustainable Value approach combines two largely incompatible objectives. Nevertheless, we maintain that both ways of benchmarking can provide useful and, moreover, complementary insights. If one intends to present the overall resource efficiency of the firm from the investor's viewpoint, we recommend the original benchmarking methodology. If, on the other hand, one aspires to create a prescriptive tool setting up some sort of reallocation scheme, we advocate implementation of productive efficiency theory. Although the discussion on benchmark application is certainly substantial, we should avoid letting the debate become narrowed to it. Beyond the benchmark concern, we see several other challenges for the development of the Sustainable Value approach: (1) a more systematic resource selection, (2) the inclusion of the value chain and (3) additional analyses related to policy in order to increase interpretative power.
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (r_e) and 2-D column-mean droplet number concentration (N_d). By using an adaptation of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds using a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in r_e, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and r_e are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
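A generic, perturbed-observation ensemble Kalman filter update of the kind adapted here can be sketched as follows; the dimensions, the linear stand-in for the radiative transfer forward model, and the error covariances are all placeholders rather than values from the retrieval.

```python
# Hedged sketch of a generic (stochastic, perturbed-observation) ensemble Kalman
# filter update: an ensemble of cloud states is nudged toward observed radiances
# through ensemble-estimated covariances.
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 50, 6, 40

ensemble = rng.normal(size=(n_state, n_ens))        # prior ensemble of cloud states
H = rng.normal(size=(n_obs, n_state)) / n_state     # stand-in for 3-D radiative transfer
R = 0.05 * np.eye(n_obs)                            # radiance error covariance
y = H @ rng.normal(size=n_state)                    # "observed" radiances

X = ensemble - ensemble.mean(axis=1, keepdims=True)  # state anomalies
Y = H @ ensemble
Y -= Y.mean(axis=1, keepdims=True)                   # predicted-observation anomalies

# Kalman gain built from ensemble-estimated covariances.
K = (X @ Y.T / (n_ens - 1)) @ np.linalg.inv(Y @ Y.T / (n_ens - 1) + R)

# Perturbed-observation update: each member is drawn toward its own noisy copy of y.
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
analysis = ensemble + K @ (y_pert - H @ ensemble)
```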