845 results for New paradigm
Abstract:
There is an increasing body of evidence that significant and exciting changes are under way in how some organizations see their world and transform themselves to fit this new vision. The change is so fundamental as to constitute a paradigm shift. There is further evidence that some hospitality firms may be part of this transformation. In this article, the author advocates the use of vanguard management.
Abstract:
Salutogenesis is now accepted as a part of the contemporary model of disease: an individual is affected not only by pathogenic factors in the environment, but also by those that promote well-being (salutogenesis). Given that "environment" extends to include the built environment, promotion of salutogenesis has become part of the architectural brief for contemporary healthcare facilities, drawing on a growing evidence base. Salutogenesis is inextricably linked with the notion of person-environment "fit". MyRoom is a proposal for an integrated architectural and pervasive computing model, which enhances psychosocial congruence by using real-time data indicative of the individual's physical status to enable the environment of his or her room (colour, light, temperature) to adapt on an ongoing basis in response to bio-signals. This work is part of the PRTLI-IV funded programme NEMBES, investigating the use of embedded technologies in the built environment. Different care contexts require variations in the model, and iterative prototyping investigating use in different contexts will progressively lead to the development of a fully integrated adaptive salutogenic single-room prototype.
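The adaptive behaviour described above lends itself to a simple illustration. The following is a minimal sketch, assuming hypothetical bio-signal inputs (heart rate, skin temperature), thresholds and room parameters; it illustrates the general idea of an environment adapting to real-time physical status, not the actual MyRoom model.

```python
# Hypothetical sketch of the adaptive-room idea: bio-signals drive gradual
# adjustments to the room's colour temperature, light level and heating
# set-point. All signal names, thresholds and mappings are illustrative
# assumptions, not the MyRoom implementation.
from dataclasses import dataclass


@dataclass
class BioSignals:
    heart_rate_bpm: float      # from a wearable sensor (assumed input)
    skin_temp_c: float         # peripheral skin temperature (assumed input)


@dataclass
class RoomState:
    colour_temp_k: int         # warmer (lower K) vs cooler (higher K) light
    light_level_pct: int       # dimmer setting, 0-100
    heating_setpoint_c: float  # room temperature target


def adapt_room(signals: BioSignals, room: RoomState) -> RoomState:
    """Nudge the room towards a calmer setting when arousal indicators rise."""
    if signals.heart_rate_bpm > 90:
        # Elevated heart rate: warmer, dimmer light to promote relaxation.
        room.colour_temp_k = max(2700, room.colour_temp_k - 200)
        room.light_level_pct = max(30, room.light_level_pct - 10)
    if signals.skin_temp_c < 32.0:
        # Cool periphery: raise the heating set-point slightly.
        room.heating_setpoint_c = min(24.0, room.heating_setpoint_c + 0.5)
    return room


room = RoomState(colour_temp_k=4000, light_level_pct=70, heating_setpoint_c=21.0)
room = adapt_room(BioSignals(heart_rate_bpm=95, skin_temp_c=31.5), room)
print(room)
```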
Abstract:
In most e-learning scenarios, communication and on-line collaboration are seen as an add-on feature to resource-based learning. This paper will endeavour to present a pedagogical framework for inverting this view and putting communities of practice forward as the basic paradigm for e-learning. It will present an approach currently being used in the development of a virtual Radiopharmacy community, called VirRAD, and will discuss how theory can lead to an instructional design approach that supports technologically enhanced learning.
Abstract:
Over the last few years, more and more heuristic decision-making techniques have been inspired by nature, e.g. evolutionary algorithms, ant colony optimisation and simulated annealing. More recently, a novel computational intelligence technique inspired by immunology has emerged, called Artificial Immune Systems (AIS). This immune-system-inspired technique has already proved useful in solving some computational problems. In this keynote, we will very briefly describe the immune system metaphors that are relevant to AIS. We will then give some illustrative real-world problems suitable for AIS use and show a step-by-step algorithm walkthrough. A comparison of AIS with other well-known algorithms and areas for future work will round off this keynote. It should be noted that, as AIS is still a young and evolving field, there is not yet a fixed algorithm template, and hence actual implementations might differ somewhat from the examples given here.
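As a concrete illustration of the immune-system metaphor, the following is a minimal sketch of negative selection, one of the classic AIS algorithms, applied to anomaly detection on binary strings. The string length, matching rule and detector count are illustrative assumptions; as the abstract notes, there is no fixed AIS template and real implementations differ.

```python
# Minimal negative-selection sketch: random detectors that match nothing in
# the "self" set are kept, then used to flag non-self (anomalous) samples.
# Parameters are illustrative assumptions only.
import random


def hamming_match(a: str, b: str, threshold: int) -> bool:
    """Two strings 'match' if they agree in at least `threshold` positions."""
    return sum(x == y for x, y in zip(a, b)) >= threshold


def train_detectors(self_set, length=8, threshold=6, n_detectors=50, seed=0):
    """Keep randomly generated detectors that match nothing in the self set."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = "".join(rng.choice("01") for _ in range(length))
        if not any(hamming_match(candidate, s, threshold) for s in self_set):
            detectors.append(candidate)
    return detectors


def is_anomalous(sample, detectors, threshold=6):
    """A sample is flagged as non-self if any detector matches it."""
    return any(hamming_match(sample, d, threshold) for d in detectors)


self_set = ["00000000", "00000001", "00000011"]
detectors = train_detectors(self_set)
print(is_anomalous("00000000", detectors))  # expected False (self pattern)
print(is_anomalous("11111111", detectors))  # likely True (non-self pattern)
```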
Abstract:
One of the aspects related to biolaw is that of security and health, or, in the expression used by leading authorities on the subject, “the securitization of health”. The situations connected with the Security Council's work over recent decades constitute an interesting subject: the role of the UN blue helmets in many countries where HIV/AIDS is widespread, the spread of some diseases in Haiti, the Ebola “crisis” of 2014, and the efforts of the World Health Organization to fight the Zika virus. What role does the United Nations Security Council play in this field, and how does it try to establish a relationship between security and health?
Abstract:
The recently developed reference-command tracking version of model predictive static programming (MPSP) is successfully applied to a single-stage closed grinding mill circuit. MPSP is an innovative optimal control technique that combines the philosophies of model predictive control (MPC) and approximate dynamic programming. The performance of the proposed MPSP control technique, which can be viewed as a 'new paradigm' under the nonlinear MPC philosophy, is compared to the performance of a standard nonlinear MPC technique applied to the same plant under the same conditions. Results show that the MPSP control technique is more than capable of tracking the desired set-point in the presence of model-plant mismatch, disturbances and measurement noise. MPSP and nonlinear MPC perform comparably, with definite advantages offered by MPSP. The computational speed of MPSP is increased through a sequence of innovations, such as the conversion of the dynamic optimization problem to a low-dimensional static optimization problem, the recursive computation of sensitivity matrices, and the use of a closed-form expression to update the control. To alleviate the burden on the optimization procedure in standard MPC, the control horizon is normally restricted; in the MPSP technique, however, the control horizon is extended to the prediction horizon with only a minor increase in computational time. Furthermore, the MPSP technique generally takes only a couple of iterations to converge, even when input constraints are applied. MPSP can therefore be regarded as a potential candidate for online applications of the nonlinear MPC philosophy to real-world industrial process plants.
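For readers unfamiliar with the receding-horizon idea shared by MPC and MPSP, the following is a minimal sketch: a finite-horizon cost is optimised against a prediction model, only the first control move is applied, and the procedure repeats at the next step. The scalar toy plant, horizon and weights are assumptions for illustration only; this is not the grinding-mill model or the MPSP algorithm itself.

```python
# Minimal receding-horizon (MPC-style) loop on a toy scalar nonlinear plant.
# Plant, horizon, set-point and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize


def plant(x, u):
    """Toy nonlinear plant used as the prediction model."""
    return 0.9 * x + 0.5 * np.tanh(u)


def predicted_cost(u_seq, x0, setpoint):
    """Quadratic tracking cost accumulated over the prediction horizon."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = plant(x, u)
        cost += (x - setpoint) ** 2 + 0.01 * u ** 2
    return cost


horizon, setpoint, x = 10, 1.0, 0.0
for step in range(30):
    res = minimize(predicted_cost, np.zeros(horizon), args=(x, setpoint))
    u_now = res.x[0]        # apply only the first control move
    x = plant(x, u_now)     # "true" plant response (here: same model, no mismatch)
    print(f"step {step:2d}  u={u_now:+.3f}  x={x:.3f}")
```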
Abstract:
Background: P300 and steady-state visual evoked potential (SSVEP) approaches have been widely used for brain–computer interface (BCI) systems. However, neither approach works for all subjects. Some groups have reported that a hybrid BCI that combines two or more approaches might provide BCI functionality to more users. Hybrid P300/SSVEP BCIs have only recently been developed and validated, and very few avenues to improve performance have been explored. New method: The present study compares an established hybrid P300/SSVEP BCI paradigm to a new paradigm in which shape changing, instead of color changing, is adopted for P300 evocation to decrease the degradation of SSVEP strength. Result: The results show that the new hybrid paradigm presented in this paper yields much better performance than the normal hybrid paradigm. Comparison with existing method: A performance increase of nearly 20% in SSVEP classification is achieved using the new hybrid paradigm in comparison with the normal hybrid paradigm. All the paradigms used in this paper, except the normal hybrid paradigm, obtain 100% accuracy in P300 classification. Conclusions: The new hybrid P300/SSVEP BCI paradigm, in which shape changing replaces color changing, obtains SSVEP classification accuracy as high as the traditional SSVEP paradigm and P300 classification accuracy as high as the traditional P300 paradigm. P300 did not interfere with the SSVEP response using the new hybrid paradigm presented in this paper, which is superior to the normal hybrid P300/SSVEP paradigm.
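The SSVEP side of such a hybrid system ultimately reduces to deciding which flicker frequency dominates the recorded EEG. The following is a minimal sketch of that decision using spectral power on a synthetic epoch; the sampling rate, candidate frequencies and signal model are assumptions, not the paper's recording setup or classifier.

```python
# Minimal SSVEP frequency-detection sketch: pick the candidate flicker
# frequency with the highest narrow-band power in the epoch's spectrum.
# All numbers below are illustrative assumptions.
import numpy as np

fs = 250                               # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)            # one 4-second epoch
candidates = [8.0, 10.0, 12.0, 15.0]   # candidate flicker frequencies in Hz

# Synthetic EEG epoch: a 10 Hz SSVEP response buried in noise.
rng = np.random.default_rng(0)
eeg = 1.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0, 2.0, t.size)

# Power spectrum of the epoch.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)


def band_power(f0, half_width=0.3):
    """Summed power in a narrow band around the candidate frequency."""
    mask = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return spectrum[mask].sum()


detected = max(candidates, key=band_power)
print(f"detected SSVEP target: {detected} Hz")  # expected 10.0 Hz
```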
Abstract:
Immersion and interaction have been identified as key factors influencing the quality of experience in stereoscopic video systems. The work presented here aims to create a new paradigm for 3D multimedia consumption that exploits these factors in order to increase user involvement. We use a 5-sided CAVE™ environment to support 3D panoramic video reproduction, real-time insertion of synthetic objects into the three-dimensional scene and real-time user interaction with the inserted elements. In this paper we describe our system requirements, functionalities, conceptual design and preliminary implementation results, emphasizing the most relevant challenges addressed. The focus is on three main issues: the generation of stereoscopic video panoramas; the synchronous reproduction of immersive 3D video across multiple screens; and the real-time insertion algorithm implemented for the integration of synthetic objects into the stereoscopic video. These results have been successfully integrated into the graphic engine managing the operation of the CAVE™ infrastructure.
Abstract:
This paper discusses a new paradigm of real-time simulation of power systems in which equipment can be interfaced with a real-time digital simulator. In this scheme, one part of a power system is simulated using a real-time simulator, while the other part is implemented as a physical system. The only interface of the physical system with the computer-based simulator is through a data-acquisition system. The physical system is driven by a voltage-source converter (VSC) that mimics the power system simulated in the real-time simulator. In this paper, the VSC operates in a voltage-control mode to track the point-of-common-coupling voltage signal supplied by the digital simulator. This way of splitting a network into two parts and running a real-time simulation with a physical system in parallel is called a power network in loop here. It opens up the possibility of studying the interconnection of one or several distributed generators to a complex power network. The proposed implementation is verified through simulation studies using PSCAD/EMTDC and through hardware implementation on a TMS320G2812 DSP.
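The key interface described above is the converter tracking a voltage reference supplied by the digital simulator. The following is a minimal sketch of such a tracking loop, assuming a hypothetical first-order converter model and PI gains; it is not the paper's VSC controller or DSP implementation.

```python
# Minimal voltage-tracking sketch: a PI controller drives a first-order
# "converter" to follow a 50 Hz reference streamed from the simulator.
# Time step, gains and plant model are illustrative assumptions.
import math

dt = 1e-4             # control time step in seconds (assumed)
kp, ki = 5.0, 2000.0  # PI gains (assumed)
tau = 2e-3            # first-order converter time constant in seconds (assumed)

v_out, integral = 0.0, 0.0
errors = []
for k in range(4000):                       # 0.4 s of simulated time
    t = k * dt
    # Voltage reference from the digital simulator: 50 Hz PCC voltage (per-unit).
    v_ref = math.sin(2 * math.pi * 50.0 * t)
    error = v_ref - v_out
    integral += error * dt
    u = kp * error + ki * integral          # PI voltage command
    v_out += dt / tau * (u - v_out)         # first-order converter response
    errors.append(error)

last_cycle = errors[-200:]                  # one 50 Hz cycle at dt = 1e-4 s
rms = math.sqrt(sum(e * e for e in last_cycle) / len(last_cycle))
print(f"tracking error (RMS, last cycle): {rms:.3f} per-unit")
```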
Abstract:
It is recognised that individuals do not always respond honestly when completing psychological tests. One of the foremost issues for research in this area is the inability to detect individuals attempting to fake. While a number of strategies have been identified in faking, a commonality of these strategies is the latent role of long term memory. Seven studies were conducted in order to examine whether it is possible to detect the activation of faking related cognitions using a lexical decision task. Study 1 found that engagement with experiential processing styles predicted the ability to fake successfully, confirming the role of associative processing styles in faking. After identifying appropriate stimuli for the lexical decision task (Studies 2A and 2B), Studies 3 to 5 examined whether a cognitive state of faking could be primed and subsequently identified, using a lexical decision task. Throughout the course of these studies, the experimental methodology was increasingly refined in an attempt to successfully identify the relevant priming mechanisms. The results were consistent and robust throughout the three priming studies: faking good on a personality test primed positive faking related words in the lexical decision tasks. Faking bad, however, did not result in reliable priming of negative faking related cognitions. To more completely address potential issues with the stimuli and the possible role of affective priming, two additional studies were conducted. Studies 6A and 6B revealed that negative faking related words were more arousing than positive faking related words, and that positive faking related words were more abstract than negative faking related words and neutral words. Study 7 examined whether the priming effects evident in the lexical decision tasks occurred as a result of an unintentional mood induction while faking the psychological tests. Results were equivocal in this regard. This program of research aligned the fields of psychological assessment and cognition to inform the preliminary development and validation of a new tool to detect faking. Consequently, an implicit technique to identify attempts to fake good on a psychological test has been identified, using long established and robust cognitive theories in a novel and innovative way. This approach represents a new paradigm for the detection of individuals responding strategically to psychological testing. With continuing development and validation, this technique may have immense utility in the field of psychological assessment.
Abstract:
Evaluation of Inagaki N, Kondo K, Yoshinari T, et al. Efficacy and safety of canagliflozin in Japanese patients with type 2 diabetes: a randomized, double-blind, placebo-controlled, 12-week study. Diabetes Obes Metab 2013. [Epub ahead of print] and Cefalu WT, Leiter LA, Yoon KH, et al. Efficacy and safety of canagliflozin versus glimepiride in patients with type 2 diabetes inadequately controlled with metformin (CANTATA-SU): 52 week results from a randomized, double-blind, phase 3 non-inferiority trial. Lancet 2013;382:941-50 INTRODUCTION Inhibition of the sodium-glucose cotransporter 2 (SGLT2), to promote the excretion of glucose, is a new paradigm in the treatment of type 2 diabetes. AREAS COVERED Canagliflozin is an SGLT2 inhibitor that has been the subject of two recent clinical trials, which are evaluated here. EXPERT OPINION Studies with canagliflozin, in subjects with type 2 diabetes, have shown that its use is associated with reductions in HbA1c and body weight and small reductions in blood pressure and triglycerides, while increasing high-density lipoprotein cholesterol and low-density lipoprotein cholesterol. As monotherapy in Japanese subjects, or in comparison with glimepiride in CANTATA-SU (CANagliflozin Treatment and Trial Analysis versus SUlphonylurea), canagliflozin causes a low incidence of hypoglycemia, and this is an advantage over glimepiride. However, one of the disadvantages with canagliflozin, which was also highlighted in CANTATA-SU, is that canagliflozin can cause urogenital infections, which are not observed with other antidiabetic drugs. The Food and Drug Administration has recently approved canagliflozin for use in type 2 diabetes, while directing that a clinical outcome safety trial be undertaken. We are concerned that canagliflozin has been approved for use in type 2 diabetes prior to a clinical outcome study of efficacy being undertaken and without the outcome of further safety testing.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (the number of unknowns exceeds the number of equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
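The underdetermined sparse-recovery setting discussed in this abstract can be illustrated with a minimal sketch: fewer measurements than unknowns, a sparse signal, and a greedy recovery algorithm (orthogonal matching pursuit). The dimensions, sparsity level and Gaussian measurement matrix are assumptions for illustration; they are not the nested or coprime designs or the correlation-aware framework developed in the thesis.

```python
# Minimal sparse-recovery sketch for an underdetermined system y = A x with
# more unknowns than measurements. Orthogonal matching pursuit is used purely
# for illustration; all dimensions and the matrix design are assumptions.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20, 60, 3                       # 20 measurements, 60 unknowns, 3-sparse
A = rng.normal(size=(m, n)) / np.sqrt(m)  # underdetermined measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                            # noiseless measurements


def omp(A, y, k):
    """Greedily pick the column most correlated with the residual, k times."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat


x_hat = omp(A, y, k)
# With these comfortable dimensions the two supports typically coincide.
print("recovered support:", np.flatnonzero(np.round(x_hat, 6)))
print("true support:     ", np.flatnonzero(x_true))
```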
Abstract:
This paper reviews the development of computational fluid dynamics (CFD) specifically for turbomachinery simulations, with a particular focus on application to problems with complex geometry. The review is structured by considering this development as a series of paradigm shifts, followed by asymptotes. The original S1-S2 blade-blade-throughflow model is briefly described, followed by the development of two-dimensional and then three-dimensional blade-blade analysis. This in turn evolved from inviscid to viscous analysis and then from steady to unsteady flow simulations. This development trajectory led, over a surprisingly small number of years, to an accepted approach: a 'CFD orthodoxy'. A very important current area of intense interest and activity in turbomachinery simulation is accounting for real geometry effects, not just in the secondary air and turbine cooling systems but also those associated with the primary path. The requirements here are threefold: capturing and representing these geometries in a computer model; making rapid design changes to these complex geometries; and managing the very large associated computational models on PC clusters. Accordingly, the challenges in the application of the current CFD orthodoxy to complex geometries are described in some detail. The main aim of this paper is to argue that the current CFD orthodoxy is on a new asymptote, is not in fact suited to application to complex geometries, and that a paradigm shift must be sought. In particular, the new paradigm must be geometry centric and inherently parallel, without serial bottlenecks. The main contribution of this paper is to describe such a potential paradigm shift, inspired by the animation industry and based on a fundamental shift in perspective from explicit to implicit geometry, and then to illustrate this with a number of applications to turbomachinery.
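The shift from explicit to implicit geometry mentioned above can be illustrated with signed distance fields, a representation widely used in the animation industry. The following minimal sketch assumes simple spherical primitives and a uniform query grid; it is not the paper's geometry system, but it shows why boolean operations and point-wise queries become trivial and naturally parallel in an implicit setting.

```python
# Implicit-geometry sketch: shapes are signed distance fields rather than
# explicit surface meshes, so boolean operations are simple min/max and every
# query point is independent. Shapes and grid resolution are assumptions.
import numpy as np


def sphere_sdf(p, centre, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - centre, axis=-1) - radius


def union(d1, d2):
    return np.minimum(d1, d2)       # boolean union of two implicit shapes


def subtract(d1, d2):
    return np.maximum(d1, -d2)      # carve the second shape out of the first


# Evaluate the implicit geometry on a grid of query points; every point is
# independent, so this maps naturally onto parallel evaluation.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 64)] * 3, indexing="ij"), axis=-1)
body = sphere_sdf(grid, np.array([0.0, 0.0, 0.0]), 0.8)
hole = sphere_sdf(grid, np.array([0.5, 0.0, 0.0]), 0.4)
shape = subtract(body, hole)

inside = shape < 0.0                 # point-in-solid query, no mesh needed
print(f"fraction of grid points inside the solid: {inside.mean():.3f}")
```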