805 results for decode and forward
Abstract:
During the last 30 years, significant debate has taken place regarding multilevel research. However, the extent to which multilevel research is overtly practiced remains to be examined. This article analyzes 10 years of organizational research within a multilevel framework (from 2001 to 2011). The goals of this article are (a) to understand what has been done, during this decade, in the field of organizational multilevel research and (b) to suggest new arenas of research for the next decade. A total of 132 articles were selected for analysis through ISI Web of Knowledge. Through a broad-based literature review, results suggest that there is a balance between the number of empirical and conceptual papers on multilevel research, with most studies addressing the cross-level dynamics between teams and individuals. In addition, this study also found that time still has little presence in organizational multilevel research. Implications, limitations, and future directions are addressed at the end. Organizations are made of interacting layers. That is, between layers (such as divisions, departments, teams, and individuals) there is often some degree of interdependence that leads to bottom-up and top-down influence mechanisms. Teams and organizations are contexts for the development of individual cognitions, attitudes, and behaviors (top-down effects; Kozlowski & Klein, 2000). Conversely, individual cognitions, attitudes, and behaviors can also influence the functioning and outcomes of teams and organizations (bottom-up effects; Arrow, McGrath, & Berdahl, 2000). For example, an organization’s reward system may influence employees’ intention to quit and the presence or absence of extra-role behaviors. At the same time, many studies have shown the importance of bottom-up emergent processes that yield higher level phenomena (Bashshur, Hernández, & González-Romá, 2011; Katz-Navon & Erez, 2005; Marques-Quinteiro, Curral, Passos, & Lewis, in press). For example, the affectivity of individual employees may influence their team’s interactions and outcomes (Costa, Passos, & Bakker, 2012). Several authors agree that organizations must be understood as multilevel systems, meaning that adopting a multilevel perspective is fundamental to understanding real-world phenomena (Kozlowski & Klein, 2000). However, whether this agreement is reflected in the practice of multilevel research seems less clear. In fact, how much is known about the quantity and quality of multilevel research done in the last decade? The aim of this study is to compare what has been proposed theoretically, concerning the importance of multilevel research, with what has actually been empirically studied and published. First, this article outlines a review of multilevel theory, followed by what has been theoretically “put forward” by researchers. Second, this article presents what has actually been “practiced,” based on the results of a review of multilevel studies published from 2001 to 2011 in business and management journals. Finally, some barriers and challenges to true multilevel research are suggested. This study contributes to multilevel research by describing the last 10 years of research. It quantitatively depicts the types of articles being written and where the majority of publications on empirical and conceptual work related to multilevel thinking can be found.
Abstract:
Research into the dark side of customer management and marketing is progressively growing. The marketing landscape today is dominated by suspicion and distrust as a result of practices that include hidden fees, deception and information mishandling. In such a pessimistic economy, marketers must reconceptualise the notion of fairness in marketing and customer management, so that sophisticated customisation schemes and advancements in marketing can flourish while avoiding further control and imposed regulation. In this article, emerging research is drawn upon to suggest that existing quality measures of marketing activities, including service, relationships and experiences, may not comprehensively capture what matters in a socially and ethically oriented marketing landscape, and therefore do not measure the fairness that is truly important in such an economy. The paper puts forward the concept of Fairness Quality (FAIRQUAL), which incorporates and extends existing thinking on relationship building, experience creation and other types of customer management practices that are believed to predict consumer intentions. It is proposed that a fairness quality measure will aid marketers in this challenging landscape and economy.
Abstract:
An efficient two-level model identification method aimed at maximising a model's generalisation capability is proposed for a large class of linear-in-the-parameters models built from observational data. A new elastic net orthogonal forward regression (ENOFR) algorithm is employed at the lower level to carry out simultaneous model selection and elastic net parameter estimation. The two regularisation parameters of the elastic net are optimised at the upper level using a particle swarm optimisation (PSO) algorithm that minimises the leave-one-out (LOO) mean square error (LOOMSE). There are two original contributions. First, an elastic net cost function is defined and applied based on orthogonal decomposition, which facilitates automatic model structure selection without the need for a predetermined error tolerance to terminate the forward selection process. Second, it is shown that the LOOMSE of the resulting ENOFR models can be computed analytically without actually splitting the data set, and the associated computational cost is small thanks to the ENOFR procedure. Consequently, a fully automated procedure is achieved without resorting to a separate validation data set for iterative model evaluation. Illustrative examples are included to demonstrate the effectiveness of the new approaches.
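As a rough illustration of why an analytic LOOMSE avoids splitting the data, the sketch below computes the leave-one-out residuals of a ridge-regularised linear-in-the-parameters model directly from the hat matrix. It is a generic shortcut, not the authors' ENOFR algorithm; the function name, toy data and fixed regulariser are our own assumptions.

```python
# Minimal sketch (not the authors' ENOFR): analytic leave-one-out (LOO) MSE
# for a ridge-regularised linear-in-the-parameters model y ~ Phi @ theta.
# The LOO residuals follow from the hat matrix without refitting N times.
import numpy as np

def loo_mse_ridge(Phi, y, lam):
    """Analytic LOO mean-square error for ridge regression (lambda held fixed)."""
    N, m = Phi.shape
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T)  # hat matrix
    resid = y - H @ y                        # ordinary residuals
    loo_resid = resid / (1.0 - np.diag(H))   # PRESS-style LOO residuals
    return np.mean(loo_resid ** 2)

# Toy usage on synthetic data
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 8))
y = Phi @ rng.normal(size=8) + 0.1 * rng.normal(size=100)
print(loo_mse_ridge(Phi, y, lam=0.1))
```

The elastic net's L1 term complicates this shortcut in general, which is why the orthogonal decomposition emphasised in the abstract matters for keeping the LOOMSE cheap to evaluate.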
Abstract:
In this EUDO CITIZENSHIP Forum Debate, several authors consider the interrelations between the eligibility criteria for participation in an independence referendum (which may result in the creation of a new independent state) and the determination of the putative citizenship ab initio (on day one) of such a state. The kick-off contribution argues for a resemblance between the franchise of an independence referendum and the initial determination of the citizenry, critically appraising the incongruence between the franchise for the 18 September 2014 Scottish independence referendum and the blueprint for Scottish citizenship ab initio put forward by the Scottish Government in its 'Scotland's Future' White Paper. Contributors to this debate come from divergent disciplines (law, political science, sociology, philosophy). They reflect on and contest the above claims, both in general and in relation to regional settings including (in addition to Scotland) Catalonia/Spain, Flanders/Belgium, Quebec/Canada, post-Yugoslavia and Puerto Rico/USA.
Abstract:
A parameterization of mesoscale eddies in coarse-resolution ocean general circulation models (GCMs) is formulated and implemented using a residual-mean formalism. In that framework, mean buoyancy is advected by the residual velocity (the sum of the Eulerian and eddy-induced velocities) and modified by a residual flux which accounts for the diabatic effects of mesoscale eddies. The residual velocity is obtained by stepping forward a residual-mean momentum equation in which eddy stresses appear as forcing terms. Study of the spatial distribution of eddy stresses, derived by using them as control parameters to "fit" the residual-mean model to observations, supports the idea that eddy stresses can be likened to a vertical down-gradient flux of momentum with a coefficient that is constant in the vertical. The residual eddy flux is set to zero in the ocean interior, where mesoscale eddies are assumed to be quasi-adiabatic, but is parameterized by a horizontal down-gradient diffusivity near the surface, where eddies develop a diabatic component as they stir properties horizontally across steep isopycnals. The residual-mean model is implemented and tested in the MIT general circulation model. It is shown that the resulting model (1) has a climatology that is superior to that obtained using the Gent and McWilliams parameterization scheme with a spatially uniform diffusivity and (2) allows one to significantly reduce the (spurious) horizontal viscosity used in coarse-resolution GCMs.
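The relations below sketch, in generic notation of our own choosing (which may differ from the paper's symbols), the residual-mean decomposition and the two closures described in the abstract.

```latex
% Schematic of the residual-mean framework (illustrative notation only);
% requires amsmath.
\begin{align}
  \mathbf{u}_{\mathrm{res}} &= \bar{\mathbf{u}} + \mathbf{u}^{*}
    \quad\text{(residual = Eulerian mean + eddy-induced velocity)} \\
  \partial_t \bar{b} + \mathbf{u}_{\mathrm{res}} \cdot \nabla \bar{b}
    &= -\,\nabla \cdot \mathbf{F}_{\mathrm{res}}
    \quad\text{(mean buoyancy advected by the residual velocity)} \\
  \mathbf{F}_{\mathrm{res}} &=
    \begin{cases}
      0 & \text{quasi-adiabatic interior,} \\
      -\,\kappa_h \nabla_h \bar{b} & \text{diabatic surface layer,}
    \end{cases} \\
  \boldsymbol{\tau}_{\mathrm{eddy}} &\approx \nu\, \partial_z \bar{\mathbf{u}},
    \qquad \nu \ \text{constant in the vertical.}
\end{align}
```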
Abstract:
Magic and Medieval Society presents a thematic approach to the topic of magic and sorcery in western Europe between the eleventh and the fifteenth centuries. It aims to provide readers with the conceptual and documentary tools to reach informed conclusions as to the existence, nature, importance and uses of magic in medieval society. Contrary to some previous approaches, this book argues that magic was inextricably connected to other areas of cultural practice and was found across medieval society: at medieval courts; at universities; and within the Church itself. The book also puts forward the argument that the witch craze was not a medieval phenomenon but rather the product of the Renaissance and the Reformation, and demonstrates how the components for the early-modern persecution of witches were put into place.
Abstract:
This study investigates the financial effects of additions to and deletions from the most well-known social stock index: the MSCI KLD 400. Our study makes use of the unique setting that index reconstitution provides and allows us to bypass possible issues of endogeneity that commonly plague empirical studies of the link between corporate social and financial performance. By examining not only short-term returns but also trading activity, earnings per share, and long-term performance of stocks that are involved in these events, we bring forward evidence of a ‘social index effect’ where unethical transgressions are penalized more heavily than responsibility is rewarded. We find that the addition of a stock to the index does not lead to material changes in its market price, whereas deletions are accompanied by negative cumulative abnormal returns. Trading volumes for deleted stocks are significantly increased on the event date, while the operational performances of the respective firms deteriorate after their deletion from the social index.
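For readers unfamiliar with the event-study machinery implied here, the sketch below shows one common way to compute a cumulative abnormal return (CAR) around an index-deletion date using a simple market model. The window lengths, function names and simulated data are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch (not the paper's exact methodology): market-model event study
# computing the cumulative abnormal return around an event date.
import numpy as np

def car(stock_ret, market_ret, event_idx, est_win=120, evt_win=(-1, 5)):
    """CAR over evt_win days around event_idx, using a market model
    estimated on the est_win days ending just before the event window."""
    lo, hi = evt_win
    est = slice(event_idx + lo - est_win, event_idx + lo)
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)   # market model fit
    evt = slice(event_idx + lo, event_idx + hi + 1)
    abnormal = stock_ret[evt] - (alpha + beta * market_ret[evt])   # abnormal returns
    return abnormal.cumsum()[-1]

# Toy usage with simulated daily returns
rng = np.random.default_rng(1)
mkt = rng.normal(0, 0.01, 400)
stk = 0.0002 + 1.1 * mkt + rng.normal(0, 0.015, 400)
print(car(stk, mkt, event_idx=300))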
Abstract:
Scope: Epidemiological and clinical studies have demonstrated that the consumption of red, haem-rich meat may contribute to the risk of colorectal cancer. Two hypotheses have been put forward to explain this causal relationship, i.e. N-nitroso compound (NOC) formation and lipid peroxidation (LPO). Methods and Results: In this study, the NOC-derived DNA adduct O6-carboxymethylguanine (O6-CMG) and the LPO product malondialdehyde (MDA) were measured in individual in vitro gastrointestinal digestions of meat types varying in haem content (beef, pork, chicken). While MDA formation peaked during the in vitro small intestinal digestion, alkylation and concomitant DNA adduct formation were observed in seven (out of 15) individual colonic digestions using separate faecal inocula. Of those, two haem-rich meat digestions demonstrated significantly higher O6-CMG formation (p < 0.05). MDA concentrations proved to be positively correlated (p < 0.0004) with the haem content of the digested meat. The addition of myoglobin, a haem-containing protein, to the digestive simulation showed a dose–response association with O6-CMG (p = 0.004) and MDA (p = 0.008) formation. Conclusion: The results suggest the involvement of haem iron in both the LPO and NOC pathways during meat digestion. Moreover, the results unambiguously demonstrate that DNA adduct formation is very prone to inter-individual variation, suggesting a person-dependent susceptibility to colorectal cancer development following haem-rich meat consumption.
Abstract:
Recently, the original benchmarking methodology of the Sustainable Value approach has become the subject of serious debate. While Kuosmanen and Kuosmanen (2009b) critically question its validity by introducing productive efficiency theory, Figge and Hahn (2009) put forward that the implementation of productive efficiency theory severely conflicts with the original financial economics perspective of the Sustainable Value approach. We argue that the debate is very confusing because the original Sustainable Value approach presents two largely incompatible objectives. Nevertheless, we maintain that both ways of benchmarking can provide useful and, moreover, complementary insights. If one intends to present the overall resource efficiency of the firm from the investor's viewpoint, we recommend the original benchmarking methodology. If, on the other hand, one aspires to create a prescriptive tool that sets up some sort of reallocation scheme, we advocate implementing productive efficiency theory. Although the discussion on benchmark application is certainly substantial, we should avoid letting the debate become narrowed to this issue alone. Beyond the benchmark concern, we see several other challenges for the development of the Sustainable Value approach: (1) a more systematic resource selection, (2) the inclusion of the value chain and (3) additional analyses related to policy in order to increase interpretative power.
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (r_e) and 2-D column-mean droplet number concentration (N_d). By using an adaptation of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds with a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in r_e, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and r_e are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
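To make the filtering step concrete, the sketch below implements a basic stochastic ensemble Kalman filter analysis that adjusts an ensemble of cloud-state vectors given observed radiances and a forward model. It is illustrative only, not the paper's retrieval code; the function names, the linear stand-in forward model and the error covariances are our own assumptions.

```python
# Hedged sketch: stochastic ensemble Kalman filter (EnKF) analysis step.
import numpy as np

def enkf_update(X, y_obs, R, forward_model, rng):
    """X: (n_state, n_ens) prior ensemble; y_obs: (n_obs,) radiances; R: obs-error cov."""
    n_ens = X.shape[1]
    Y = np.column_stack([forward_model(X[:, i]) for i in range(n_ens)])  # predicted radiances
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xa @ Ya.T / (n_ens - 1)          # state-observation covariance
    Pyy = Ya @ Ya.T / (n_ens - 1) + R      # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)           # Kalman gain
    # Perturbed observations give each member its own innovation
    y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, n_ens).T
    return X + K @ (y_pert - Y)

# Toy usage: a 3-variable state observed through a random linear "radiance" operator
rng = np.random.default_rng(0)
H_op = rng.normal(size=(2, 3))
forward = lambda x: H_op @ x                      # stand-in for 3-D radiative transfer
X0 = rng.normal(size=(3, 50))                     # 50-member prior ensemble
R = 0.05 * np.eye(2)
y = forward(np.array([1.0, -0.5, 2.0])) + rng.normal(0, 0.05, 2)
print(enkf_update(X0, y, R, forward, rng).mean(axis=1))   # posterior ensemble mean
```

In the retrieval described above, the forward model would be a full 3-D radiative transfer code and the state would hold gridded LWC and r_e, with the ensemble spread supplying the error statistics mentioned in the abstract.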
Abstract:
We discuss the modelling of dielectric responses of amorphous biological samples. Such samples are commonly encountered in impedance spectroscopy studies, as well as in UV, IR, optical and THz transient spectroscopy experiments and in pump-probe studies. On many occasions the samples may display quenched absorption bands. A system identification framework may be developed to provide parsimonious representations of such responses. To achieve this, it is appropriate to augment the standard models found in the identification literature to incorporate fractional-order dynamics. Extensions of models using the forward shift operator, state space models, as well as their non-linear Hammerstein-Wiener counterparts, are highlighted. We also discuss the need to extend the theory of electromagnetically excited networks so that it can account for fractional-order behaviour in the non-linear regime, by incorporating non-linear elements that capture the observed non-linearities. The proposed approach leads to the development of a range of new chemometric tools for biomedical data analysis and classification.
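As a concrete, standard example of the kind of fractional-order dielectric response alluded to here (not a model taken from the article), the Cole-Cole relaxation introduces a fractional exponent into the frequency response:

```latex
% Cole-Cole relaxation: a standard fractional-order dielectric response.
% The exponent alpha corresponds to fractional-order dynamics in the time
% domain; alpha = 1 recovers ordinary (integer-order) Debye relaxation.
\begin{equation}
  \varepsilon(\omega) \;=\; \varepsilon_{\infty}
    + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (\mathrm{j}\omega\tau)^{\alpha}},
  \qquad 0 < \alpha \le 1 .
\end{equation}
```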
Abstract:
This study investigates the logics, or values, that shape the social and environmental reporting (SER) and SER assurance (SERA) process. The influence of logics is observed through a study of the conceptualisation and operationalisation of the materiality concept by accounting and non-accounting assurors and their assurance statements. We gathered qualitative data from interviews with both accounting and non-accounting assurors. We analysed the interplay between the old and new logics that are shaping materiality as a reporting concept in SER. SER is a rich field in which to study the dynamics of change because it is a voluntary, unregulated, qualitative reporting arena. It has a broad stakeholder audience, and accounting and non-accounting organisations are in competition within it. There are three key findings. First, the introduction of a new, stakeholder logic has significantly changed the meaning and role of materiality. Second, assurors portrayed a more versatile, performative, social understanding of materiality, with a forward-looking rather than a historic focus. Third, competing logics have encouraged different beliefs about materiality, and different practices, to develop. This influenced the way assurors theorised the concept and interpreted outcomes. A patchwork of localised understandings of materiality is developing. Policy implications, both in SERA and in financial audit, are explored.
Abstract:
This article proposes a systematic approach to determining the most suitable analogue redesign method for forward-type converters under digital voltage-mode control. The focus of the method is to achieve the highest phase margin at the particular switching and crossover frequencies chosen by the designer. It is shown that at high crossover frequencies relative to the switching frequency, controllers designed using backward integration have the largest phase margin, whereas at low crossover frequencies relative to the switching frequency, controllers designed using bilinear integration with pre-warping have the largest phase margin. An algorithm has been developed to determine the frequency of the crossing point where the recommended discretisation method changes. An accurate model of the power stage is used for simulation, and experimental results from a buck converter are collected. The performance of the digital controllers is compared with that of the equivalent analogue controller in both simulation and experiment. Excellent agreement between the simulation and experimental results is demonstrated. This work provides a concrete example to allow academics and engineers to systematically choose a discretisation method.
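To illustrate the kind of comparison involved (not the article's algorithm or converter design), the sketch below evaluates the phase of a simple PI-type compensator at the crossover frequency after discretising it with backward Euler and with the bilinear (Tustin) transform pre-warped at that frequency. The switching frequency, crossover frequency and gains are arbitrary assumptions.

```python
# Hedged sketch: compare the phase of a compensator at the crossover frequency
# under two discretisation methods (backward Euler vs. pre-warped bilinear).
import numpy as np

fs = 200e3          # assumed switching frequency (Hz)
Ts = 1.0 / fs
fc = 20e3           # assumed target crossover frequency (Hz)
wc = 2 * np.pi * fc

Kp, Ki = 0.5, 2.0e4  # illustrative PI gains: C(s) = Kp + Ki/s

def C_s(s):
    return Kp + Ki / s

def C_backward(z):
    # Backward Euler mapping: s -> (1 - z^-1) / Ts
    return C_s((1 - 1/z) / Ts)

def C_tustin_prewarp(z):
    # Bilinear (Tustin) mapping pre-warped so the frequency wc maps exactly
    k = wc / np.tan(wc * Ts / 2)
    return C_s(k * (z - 1) / (z + 1))

z_c = np.exp(1j * wc * Ts)   # point on the unit circle at the crossover frequency
for name, C in [("backward Euler", C_backward),
                ("Tustin + pre-warp", C_tustin_prewarp)]:
    resp = C(z_c)
    print(f"{name:>18}: phase at fc = {np.degrees(np.angle(resp)):6.2f} deg")
```

Evaluating both mappings at the same crossover point makes the phase penalty of each discretisation explicit, which is the quantity the recommended-method crossing point in the abstract is based on.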
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross-validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is likewise capable of producing a very sparse RBF model with excellent generalization performance. Unlike the previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of the new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator (LASSO) approaches, as well as the LROLS algorithm.
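The sketch below gives a much-simplified flavour of LOO-guided forward selection of RBF kernels: it greedily adds the candidate centre that most reduces an analytic leave-one-out MSE. It is not the authors' algorithm; in particular it uses a single shared kernel width and regularisation parameter, whereas the abstract describes tuning a width/regulariser pair per kernel, and the function names and toy data are ours.

```python
# Hedged sketch: greedy forward selection of RBF kernels guided by the
# analytic leave-one-out MSE of a ridge-regularised fit.
import numpy as np

def rbf_design(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def loo_mse(Phi, y, lam):
    N, m = Phi.shape
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T)  # hat matrix
    return np.mean(((y - H @ y) / (1.0 - np.diag(H))) ** 2)

def forward_select(X, y, width=0.5, lam=1e-3, max_terms=10):
    """Add, one at a time, the candidate centre (a training point) that most
    reduces the LOO MSE; stop when no candidate improves it."""
    selected, remaining, best_score = [], list(range(len(X))), np.inf
    while remaining and len(selected) < max_terms:
        scores = [loo_mse(rbf_design(X, X[selected + [j]], width), y, lam)
                  for j in remaining]
        if min(scores) >= best_score:
            break
        j_best = remaining[int(np.argmin(scores))]
        best_score = min(scores)
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score

# Toy usage on a 1-D nonlinear target
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
centres_idx, score = forward_select(X, y)
print(len(centres_idx), "kernels selected, LOO MSE =", round(score, 4))
```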
Abstract:
This article critically reflects on the widely held view of a causal chain in which trust in public authorities affects technology acceptance via perceived risk. It first puts forward a conceptual argument against this view, namely that the presence of risk is a precondition for trust playing a role in decision making. Second, results from consumer surveys in Italy and Germany are presented that support the associationist model as a counter-hypothesis. In that view, trust and risk judgments are driven by, and are thus simply indicators of, higher-order attitudes toward a certain technology, which determine acceptance instead. The implications of these findings are discussed.