876 results for unifying concept
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis that integrates the handling of unobserved heterogeneity, study covariates, publication bias and study quality. These issues must be considered simultaneously to avoid artifacts, and a method for doing so is suggested here. METHODS: The approach is based on the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the groundwork for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship between hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, with differing results. In 1992, a meta-analysis by Sillero-Arenas et al. concluded that there was a small but significant overall effect of 1.06 on the relative-risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk of breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted effect estimate of 1.14 can be given, considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and the clinical disciplines.
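The abstract does not reproduce the meta-likelihood machinery itself. As a minimal sketch of the kind of random-effects pooling such an analysis builds on, a generic DerSimonian-Laird estimate of a pooled relative risk might look as follows; the study values and function names are hypothetical, and this is a stand-in illustration, not the authors' method:

```python
import numpy as np

def random_effects_pool(log_rr, se):
    """Generic DerSimonian-Laird random-effects pooling of log relative
    risks; an illustrative stand-in, not the paper's meta-likelihood."""
    w = 1.0 / se**2                           # fixed-effect weights
    theta_f = np.sum(w * log_rr) / np.sum(w)  # fixed-effect estimate
    q = np.sum(w * (log_rr - theta_f)**2)     # Cochran's Q heterogeneity
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_star = 1.0 / (se**2 + tau2)             # random-effects weights
    theta_r = np.sum(w_star * log_rr) / np.sum(w_star)
    return np.exp(theta_r), tau2              # pooled RR, heterogeneity

# hypothetical log relative risks and standard errors from three studies
rr, tau2 = random_effects_pool(np.log([1.02, 1.31, 0.95]),
                               np.array([0.05, 0.12, 0.08]))
```

A non-zero tau2 here plays the role of the unobserved heterogeneity discussed above; the paper's contribution is to model it nonparametrically together with covariates, bias and quality.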
Abstract:
In recent years, Software Architecture has attracted increased attention from academia and industry as the unifying concept for structuring the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach builds on the work of other researchers, with the aesthetic concern of changing the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to, and in a way that can be executed efficiently. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
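As a minimal illustration of the view of reconfiguration described above, adding and removing components and connections at runtime, the following sketch models an architecture as two sets and reconfiguration as operations on them; all names are hypothetical and none of the three formalisms is implied:

```python
# Minimal sketch: an architecture as sets of components and connections,
# with reconfiguration as add/remove operations (all names hypothetical).
class Architecture:
    def __init__(self):
        self.components = set()
        self.connections = set()   # pairs of component names

    def add_component(self, name):
        self.components.add(name)

    def remove_component(self, name):
        self.components.discard(name)
        # removing a component also removes its connections
        self.connections = {c for c in self.connections if name not in c}

    def connect(self, a, b):
        assert a in self.components and b in self.components
        self.connections.add((a, b))

# a reconfiguration applied without stopping the rest of the system
arch = Architecture()
arch.add_component("client"); arch.add_component("server")
arch.connect("client", "server")
arch.add_component("cache"); arch.connect("client", "cache")  # reconfigure
```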
Abstract:
Quality of life is becoming a key, unifying concept in the care and education of people with intellectual disabilities. Likewise, current perceptions of people with disabilities have changed substantially. At present, it is necessary to ask what applications and implications the principles derived from the new concepts of disability and quality of life have for the care and education of people with intellectual disabilities. This research essentially aims to develop a set of instruments for evaluating the quality of the educational practices of special education centres. To this end, a structured and orderly procedure was followed both in developing the instruments and in their initial validation. To assess the quality of special education centres, a series of questionnaires is proposed for professionals, families and students alike. The results indicate that the set of assessment instruments makes it possible to gather broad and varied information on the practices of a special education centre, to identify its strengths and weaknesses, and to serve as a basis for improvement plans closely tied to the centre's particular context and to what is considered good educational practice.
Abstract:
The introduction of unifying concepts in mathematics teaching typically favours the axiomatic approach. Unsurprisingly, such an approach tends to algorithmize tasks in order to make their resolution more efficient and the newly taught concept more transparent (Chevallard, 1991). This classical response nevertheless obscures the unifying role of the concept and does not encourage use of its power. To improve the learning of a unifying concept, this thesis studies the relevance of a didactic sequence in engineering education centred on a unifying concept of linear algebra: the linear transformation (LT). The notion of unification and the question of the meaning of linearity are approached through the acquisition of problem-solving skills. The sequence of problems to be solved targets the process of constructing an abstract concept (the LT) over an already mathematized domain, with the intention of bringing out the unifying aspect of the formal notion (Astolfi and Drouin, 1992). Building on results from research in science and mathematics education (Dupin, 1995; Sfard, 1991), we design didactic situations based on modelling elements, seeking to articulate two ways of conceiving the object ("procedural" and "structural") so as to arrive at a solution strategy that is safer, more economical and reusable. In particular, we sought to situate the notion in the various mathematical domains where it applies: arithmetic, geometric, algebraic and analytic. The sequence aims to develop links between different mathematical frameworks, and between different representations of the LT in the various mathematical registers, drawing in particular on the historical development of the notion. Moreover, the didactic sequence seeks to maintain a balance between the applicability of the tasks to the targeted professional practice and the theoretical side conducive to structuring the concepts. The study was conducted with Chilean engineering students in their first linear algebra course. We carried out a detailed a priori analysis to strengthen the robustness of the sequence and to prepare for the data analysis. Through analysis of the responses to the entry questionnaire, the teams' productions, and the comments received in interviews, we were able to identify the mathematical competencies and the levels of explicitness (Caron, 2004) brought into play in the use of the LT. The results show the emergence of the unifying role of the LT, even among students whose mathematical problem-solving habits are marked by a procedural orientation, in learning as well as in teaching. The didactic sequence proved effective for students' progressive construction of the notion of linear transformation (LT), with its proper meaning and properties: the LT thus appears as an economical means of solving problems outside linear algebra, which allows students to abstract its underlying properties. We also observed that certain previously taught concepts can act as obstacles to the intended unification. This can bring students back to their starting point, and under these conditions the role of the LT is reduced to revealing partial knowledge rather than guiding the solution process.
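For reference, the defining property that gives the linear transformation its unifying character across the arithmetic, geometric, algebraic and analytic settings mentioned above is the standard one:

```latex
% A map T : V -> W between vector spaces is a linear transformation iff
\[
T(u + v) = T(u) + T(v), \qquad T(\lambda u) = \lambda\,T(u)
\]
% for all u, v in V and all scalars \lambda; equivalently, in one condition,
\[
T(\alpha u + \beta v) = \alpha\,T(u) + \beta\,T(v).
\]
```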
Abstract:
This thesis seeks to document the cultural change experienced by the Algonquins of Lake Abitibi and Lake Témiscamingue in the 19th century, using an ethnohistorical approach. Indigenous cultural change is often perceived as having been unfavourable and produced under constraint. This thesis shows that the changing context of the 19th century offered opportunities that allowed the Algonquins to bring new solutions to old problems for which no solution had previously existed. Although this period corresponds to the beginning of the Algonquins' territorial dispossession, no environmental stress that could have induced these changes is observed at that time. The study is conducted under the unifying concept of power. In the Algonquin conception, power is an intrinsic quality that can fluctuate over a lifetime. It is manifested in a person's ability to bend fate in their favour and to ward off misfortune. It can also be transmitted within certain families. At the beginning of the 19th century, some individuals were reputed to hold great power, and their leadership could not be questioned. Thus there were hereditary chiefs, and shamans who were powerful and sometimes feared. I argue that after the introduction of Catholicism, the Algonquins used this religion to protect themselves against power perceived as excessive. They also took the initiative of implementing the electoral system provided for in the Indian Act in order to appoint chiefs chosen for their competence and to remove feared or incompetent individuals. They likewise put protective measures in place, avoiding the concentration of too much power in the hands of an elected chief. Over the century, the band emerged as a more prominent Algonquin social entity. Indeed, band-level summer gatherings began in this period, first as a consequence of participation in the freight-transport brigades, and then of the summer Catholic missions. By the end of the century, chiefs had been granted very broad powers of political representation and played a growing social role at the band level.
Abstract:
Objective: To describe the behaviour of posterior vitreous detachment (PVD) in patients undergoing cataract surgery, using biomicroscopy, ocular ultrasonography and macular optical coherence tomography. Materials and methods: A descriptive study was conducted: a clinical case series of 13 patients undergoing cataract surgery at the Fundación Oftalmológica Nacional between February and July 2015, with 12 months of follow-up. Over six visits, best-corrected visual acuity was measured and biomicroscopy performed; ocular ultrasonography and macular optical coherence tomography were also carried out. Results: The rate of PVD on biomicroscopy changed from 7.7% to 38.4%. On ultrasonography, the PVD rate changed from 92.3% to 76.9% in the nasal area and from 84.6% to 76.9% in the temporal area; it remained at 61.5% in the superior area, varied from 69.2% to 76.9% in the inferior area, and, finally, changed from 53.8% to 76.9% in the macular area. The PVD rate on OCT changed from 69.2% at visit zero to 76.9% at visit four. Conclusions: Cataract surgery accelerates the PVD process. PVD progressed according to biomicroscopy and OCT; we do not consider ultrasonography an effective tool for describing PVD progression.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coccolithophores are a group of unicellular phytoplankton species whose ability to calcify has a profound influence on biogeochemical element cycling. Calcification rates are controlled by a large variety of biotic and abiotic factors. Among these factors, carbonate chemistry has gained considerable attention in recent years, as coccolithophores have been identified as particularly sensitive to ocean acidification. Despite intense research in this area, a general concept harmonizing the numerous and sometimes (seemingly) contradictory responses of coccolithophores to changing carbonate chemistry is still lacking. Here, we present the "substrate-inhibitor concept", which describes the dependence of calcification rates on carbonate chemistry speciation. It is based on observations that the calcification rate scales positively with bicarbonate (HCO3-), the primary substrate for calcification, and carbon dioxide (CO2), which can limit cell growth, whereas it is inhibited by protons (H+). This concept was implemented in a model equation, tested against experimental data, and then applied to understand and reconcile the diverging responses of coccolithophorid calcification rates to ocean acidification obtained in culture experiments. Furthermore, we (i) discuss how other important calcification-influencing factors (e.g. temperature and light) could be incorporated into our concept and (ii) embed it in Hutchinson's niche theory, thereby providing a framework for how carbonate chemistry-induced changes in calcification rates could be linked with changing coccolithophore abundance in the oceans. Our results suggest that the projected increase of H+ in the near future (the next couple of thousand years), paralleled by only a minor increase in inorganic carbon substrate, could impede calcification rates if coccolithophores are unable to fully adapt. However, if calcium carbonate (CaCO3) sediment dissolution and terrestrial weathering begin to increase the oceans' HCO3- and decrease their H+ concentrations in the far future (10–100 kyr), coccolithophores could find themselves in carbonate chemistry conditions that may be more favorable for calcification than they were before the Anthropocene.
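The abstract does not spell the model equation out. One plausible functional form consistent with the verbal description, a saturating bicarbonate substrate term damped by proton inhibition, would be, as an illustrative sketch only:

```latex
% Illustrative sketch only -- not necessarily the authors' equation:
% calcification rate C as a saturating function of the bicarbonate
% substrate, inhibited by protons.
\[
C = C_{\max}\,
    \frac{[\mathrm{HCO_3^-}]}{K_S + [\mathrm{HCO_3^-}]}\cdot
    \frac{1}{1 + [\mathrm{H^+}]/K_I}
\]
% C_max: maximal rate; K_S: half-saturation constant for HCO3-;
% K_I: proton inhibition constant. A CO2-dependent growth-limitation
% factor could multiply this term in the same spirit.
```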
Abstract:
We introduce a spin-charge conductance matrix as a unifying concept underlying charge and spin transport within the framework of the Landauer-Büttiker conductance formula. The spin-charge conductance matrix turns out to provide a natural and gauge-covariant description of electron transport through nanoscale electronic devices. We demonstrate that the charge and spin conductances are gauge-invariant observables which characterize transport phenomena arising from spin-dependent scattering. Tunnelling through a single magnetic atom is discussed to illustrate our theory.
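For background, the transmission-matrix form of the Landauer-Büttiker conductance, together with the textbook spin-resolved refinement from which charge and spin conductances can be composed (a standard sketch, not necessarily the matrix introduced by the authors), reads:

```latex
% Standard Landauer-Buttiker conductance from the transmission matrix t:
\[
G = \frac{e^2}{h}\,\mathrm{Tr}\!\left(t^{\dagger} t\right)
\]
% Resolving t into spin blocks t_{\sigma'\sigma} yields spin-dependent
% contributions from which charge and spin conductances can be composed:
\[
G_{\sigma'\sigma} = \frac{e^2}{h}\,
  \mathrm{Tr}\!\left(t_{\sigma'\sigma}^{\dagger}\, t_{\sigma'\sigma}\right),
\qquad \sigma,\sigma' \in \{\uparrow,\downarrow\}.
\]
```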
Abstract:
Computer-based, socio-technical systems projects are frequently failures. In particular, computer-based information systems often fail to live up to their promise. Part of the problem lies in the uncertainty of the effect of combining the subsystems that comprise the complete system; that is, the system's emergent behaviour cannot be predicted from knowledge of the subsystems alone. This paper suggests that uncertainty management is a fundamental unifying concept in the analysis and design of complex systems, and goes on to indicate that this is due to the co-evolutionary nature of the requirements and implementation of socio-technical systems. The paper presents a model of the propagation of a system change which indicates that introducing two or more changes over time can cause chaotic emergent behaviour.
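The propagation model itself is not given in the abstract. Purely as an illustrative stand-in, the following sketch shows how two changes injected over time into a nonlinear update rule can make nearby system states diverge, the signature of chaotic emergent behaviour; all parameters are hypothetical:

```python
# Illustrative only: not the paper's model. Two "changes" perturb the
# growth parameter of a logistic-type update; nearby starting states
# diverge, a signature of chaotic emergent behaviour.
def propagate(x0, changes, steps=50):
    x, r = x0, 3.6                 # r in the chaotic regime of the map
    for t in range(steps):
        r_t = r + sum(c for (when, c) in changes if when <= t)
        x = r_t * x * (1.0 - x)    # logistic update of the system state
    return x

changes = [(10, 0.2), (25, 0.15)]  # two changes introduced over time
print(propagate(0.400, changes), propagate(0.401, changes))
```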
Abstract:
Two concepts in rural economic development policy have been the focus of much research and policy action: the identification and support of clusters or networks of firms, and the availability and adoption by rural businesses of Information and Communication Technologies (ICT). From a theoretical viewpoint these policies are based on two contrasting models, with clustering seen as a process of economic agglomeration, and ICT-mediated communication as a means of facilitating economic dispersion. The study's conceptual framework is based on four interrelated elements: location, interaction, knowledge, and advantage, together with the concept of networks, which is employed as an operationally and theoretically unifying concept. The research questions are developed in four successive categories: Policy, Theory, Networks, and Method. The questions are approached through a study of two contrasting groups of rural small businesses in West Cork, Ireland: (a) Speciality Foods, and (b) firms in Digital Products and Services. The study combines Social Network Analysis (SNA) with Qualitative Thematic Analysis, using data collected from semi-structured interviews with 58 owners or managers of these businesses. The data comprise relational network data on the firms' connections to suppliers, customers, allies and competitors, together with linked qualitative data on how the firms established connections and how tacit and codified knowledge was sourced and utilised. The research finds that the key characteristics identified in the cluster literature are evident in the sample of Speciality Food businesses, in relation to flows of tacit knowledge, social embedding, and the development of forms of social capital. In particular, the research identified the presence of two distinct forms of collective social capital in this network, termed "community" and "reputation". By contrast, the sample of Digital Products and Services businesses does not have the form of a cluster, but corresponds more closely to dispersive models, or "chain" structures. Much of the economic and social structure of this set of firms is best explained in terms of "project organisation" and the operation of an individual rather than collective form of "reputation". The rural setting in which these firms are located has made them service-centric, and consequently they rely on ICT-mediated communication to exchange tacit knowledge "at a distance". It is this factor, rather than inputs of codified knowledge, that most strongly influences their operation and their need for the availability and adoption of high-quality communication technologies. The findings thus have applicability to theory in Economic Geography and to policy and practice in Rural Development. In addition, the research contributes to methodological questions in SNA and to methodological questions about the combination or mixing of quantitative and qualitative methods.
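The relational data described above lend themselves to standard SNA measures. A minimal sketch of that step, with hypothetical firms and ties, could use networkx as follows:

```python
# Minimal SNA sketch on hypothetical firm-to-firm ties of the kind
# described above (suppliers, customers, allies, competitors).
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("FirmA", "SupplierX"), ("FirmA", "FirmB"),   # supply and alliance ties
    ("FirmB", "CustomerY"), ("FirmA", "CustomerY"),
])

density = nx.density(g)               # overall cohesion of the network
centrality = nx.degree_centrality(g)  # locally well-connected firms
clustering = nx.average_clustering(g) # cluster-like closure vs. "chains"
print(density, clustering, centrality["FirmA"])
```

High closure would point toward the cluster pattern found for the Speciality Food firms; low closure with long paths would resemble the "chain" structures of the Digital Products and Services firms.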
Abstract:
Multi-gas approaches to climate change policies require a metric establishing ‘equivalences’ among emissions of various species. Climate scientists and economists have proposed four kinds of such metrics and debated their relative merits. We present a unifying framework that clarifies the relationships among them. We show, as have previous authors, that the global warming potential (GWP), used in international law to compare emissions of greenhouse gases, is a special case of the global damage potential (GDP), assuming (1) a finite time horizon, (2) a zero discount rate, (3) constant atmospheric concentrations, and (4) impacts that are proportional to radiative forcing. Both the GWP and GDP follow naturally from a cost–benefit framing of the climate change issue. We show that the global temperature change potential (GTP) is a special case of the global cost potential (GCP), assuming a (slight) fall in the global temperature after the target is reached. We show how the four metrics should be generalized if there are intertemporal spillovers in abatement costs, distinguishing between private (e.g., capital stock turnover) and public (e.g., induced technological change) spillovers. Both the GTP and GCP follow naturally from a cost-effectiveness framing of the climate change issue. We also argue that if (1) damages are zero below a threshold and (2) infinitely large above a threshold, then cost-effectiveness analysis and cost–benefit analysis lead to identical results. Therefore, the GCP is a special case of the GDP. The UN Framework Convention on Climate Change uses the GWP, a simplified cost–benefit concept. The UNFCCC is framed around the ultimate goal of stabilizing greenhouse gas concentrations. Once a stabilization target has been agreed under the convention, implementation is clearly a cost-effectiveness problem. It would therefore be more consistent to use the GCP or its simplification, the GTP.
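For concreteness, the GWP referred to above is conventionally defined as the time-integrated radiative forcing of a pulse emission of gas x relative to an equal pulse of CO2 over a horizon H:

```latex
% Conventional definition of the global warming potential of gas x over
% a time horizon H, as used in international law:
\[
\mathrm{GWP}_H(x) =
  \frac{\int_0^H \mathrm{RF}_x(t)\,\mathrm{d}t}
       {\int_0^H \mathrm{RF}_{\mathrm{CO}_2}(t)\,\mathrm{d}t}
\]
% where RF_x(t) is the radiative forcing at time t of a unit pulse
% emission of x. The GDP replaces the plain integrals by damage-weighted,
% discounted integrals; assumptions (1)-(4) above collapse it to the GWP.
```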
Abstract:
In the smart building control industry, it is becoming increasingly important to create a platform that integrates different communication protocols and eases the interaction between users and devices. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange, which introduces a new concept, the Parameter, and uses it as the common object among all the BATMP components: Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical quantity and contains information about its presentation, available actions, access type, etc. Each component of BATMP holds a copy of the parameters. In the Technology Manager, three drivers for different communication protocols, KNX, CoAP and Modbus, are implemented to convert devices into parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values by negotiation. Finally, a Negotiator is implemented in the Model Manager to notify other components about the changes taking place in any component. By applying this mechanism, BATMP ensures simultaneous and concurrent communication among users, applications and devices.
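The abstract gives no code. A minimal sketch of the Parameter idea, with all class and method names hypothetical, could look like this:

```python
# Hypothetical sketch of the Parameter concept: one shared object type
# standing for a physical quantity, with subscription for components.
class Parameter:
    def __init__(self, name, unit, access="read-write"):
        self.name = name          # e.g. "room_temperature"
        self.unit = unit          # presentation information
        self.access = access      # access type
        self.value = None
        self._subscribers = []    # applications watching this parameter

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_value(self, value):
        # a protocol driver (KNX, CoAP, Modbus) or a user scenario writes
        # here; all subscribed components are notified of the change
        self.value = value
        for notify in self._subscribers:
            notify(self)

temp = Parameter("room_temperature", "degC")
temp.subscribe(lambda p: print(p.name, "->", p.value, p.unit))
temp.set_value(21.5)
```

In this reading, the Negotiator's role would be to mediate between several subscribers proposing different values before `set_value` is committed; the single shared object type is what allows users, applications and devices to communicate concurrently.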