992 results for General Information Theory


Relevance:

90.00%

Publisher:

Abstract:

The information needs of parents of children with end-stage renal failure (ESRF) or with insulin-dependent diabetes mellitus (IDDM) were assessed by questionnaires over a 2-year period. Questionnaires were posted on seven occasions at 4-monthly intervals and were sent to both mothers and fathers. Most information needs were reported to be for detailed test results, for new information about the condition and about the child's future social development. The questions responsible for the three highest scores concerned the future: the child's fertility; their social, career and marriage prospects; and the hope for a new, improved treatment. For the IDDM mothers, scores differed significantly depending on the age of the child (P = 0.02). Change in treatment mode had no significant effect on the information needs of parents of children with ESRF (P = 0.81). Occupation was significantly associated with parents' mean general information needs scores, with occupations of lower socioeconomic status associated with higher information needs scores. There were no significant differences between the mean general information needs scores reported by parents of children with ESRF and by parents of children with IDDM (P = 0.69), or between mothers' and fathers' mean general information needs scores (P = 0.58). CONCLUSION: Multidisciplinary team members need to tailor information to the needs of individual families and be sensitive to socioeconomic factors and communication issues.

Relevance:

90.00%

Publisher:

Abstract:

This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time-series. The approach relies on a biomedical case for the coupling of the brain and heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The description of arrhythmias observed during temporal lobe epileptic events, from a probabilistic perspective, and the description of the complexity of the state of the brain, from an information theory perspective, are integrated in a fusion-of-information framework for temporal lobe epileptic seizure detection. The main contributions of the Thesis include: the introduction of a biomedical case for the coupling of the brain and heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time-series with a view to their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events, likewise for integration in a fusion-of-knowledge framework; and the investigation of the levels of the fusion-of-information architecture at which to combine the information extracted from the EEG and ECG time-series. The detection method designed in the Thesis achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
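As a hypothetical illustration of decision-level fusion (a sketch, not the Thesis's actual architecture), a conjunctive rule over per-epoch detections from the two modalities suppresses events flagged by only one channel:

```python
def fuse_decisions(eeg_flags, ecg_flags):
    """Conjunctive decision-level fusion: declare a seizure epoch only
    when both the EEG and the ECG detector flag the same epoch.
    Epochs flagged by a single modality are treated as false positives."""
    return [e and c for e, c in zip(eeg_flags, ecg_flags)]

# three epochs: only the first is flagged by both detectors
fused = fuse_decisions([True, True, False], [True, False, False])
```

The conjunctive rule trades sensitivity for specificity, which matches the false-positive-free goal stated in the abstract.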

Relevance:

90.00%

Publisher:

Abstract:

Sustainable development support, balanced scorecard development and business process modeling are viewed from the position of systemology. Extensional, intentional and potential properties of a system are considered as necessary to satisfy the functional requirements of a meta-system. The correspondence between the extensional, intentional and potential properties of a system and its sustainable, unsustainable, crisis and catastrophic states is determined. The cause of the inaccessibility of the system mission is uncovered. The correspondence between the extensional, intentional and potential properties of a system and the balanced scorecard perspectives is shown. The IDEF0 function modeling method is checked against the balanced scorecard perspectives, and the correspondence between balanced scorecard perspectives and IDEF0 notations is considered.

Relevance:

90.00%

Publisher:

Abstract:

In the agrifood sector, the explosive increase in information about environmental sustainability, often held in uncoordinated information systems, has created a new form of ignorance ('meta-ignorance') that diminishes the effectiveness of information for decision-makers. Flows of information are governed by informal and formal social arrangements that we can collectively call Informational Institutions. In this paper, we review the recent literature on such institutions. From the perspectives of information theory and new institutional economics, current informational institutions are increasing the information entropy of communications concerning environmental sustainability, and stakeholders' transaction costs of using relevant information. In our view, this reduces the effectiveness of informational governance. Future research on informational governance should explicitly address these aspects.
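The information entropy invoked here can be made concrete with Shannon's formula, H = -Σ p_i log₂ p_i: the more uncoordinated the labels for the same sustainability attribute, the higher the entropy of the message stream. A minimal sketch (the label values are invented for illustration):

```python
import math
from collections import Counter

def shannon_entropy(messages):
    """Shannon entropy (bits) of the empirical label distribution."""
    counts = Counter(messages)
    n = len(messages)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# one consistent label carries zero entropy ...
low = shannon_entropy(["organic"] * 8)
# ... while four equally frequent competing labels for the same
# attribute yield log2(4) = 2 bits
high = shannon_entropy(["organic", "bio", "eco", "green"] * 2)
```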

Relevance:

90.00%

Publisher:

Abstract:

Generalizing from his experience in solving practical problems, Koopmans set about devising the linear activity analysis model. Surprisingly, he found that the economics of his day possessed no uniform, sufficiently exact theory of production or system of concepts for it. In a pioneering study he therefore also laid down, as a theoretical framework for the linear activity analysis model, the foundations of an axiomatic production theory resting on the concept of technological sets. He is credited with the exact definition of the concepts of production efficiency and efficiency prices, and with the proof of their mutually presupposing relationship within the linear activity analysis model. Koopmans treated the present-day, purely technical definition of efficiency only as a special case; his aim was to introduce and analyse the concept of economic efficiency. This study uses the duality theorems of linear programming to reconstruct his results on the latter. It is shown, first, that his proofs are equivalent to proofs of the duality theorems of linear programming and, secondly, that the economic efficiency prices are in fact shadow prices in today's sense. It is also pointed out that the model he formulated for interpreting economic efficiency can be seen as a direct predecessor of the Arrow–Debreu–McKenzie models of general equilibrium theory, containing almost every essential element and concept of them: the equilibrium prices are nothing other than Koopmans's efficiency prices. Finally, Koopmans's model is reinterpreted as a possible tool for the microeconomic description of enterprise technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
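The identification of efficiency prices with shadow prices can be illustrated on any small linear program: the dual values of the resource constraints price the resources, and by strong duality the resource value at those prices equals the primal optimum. A sketch using a textbook example (the numbers are illustrative, not Koopmans's):

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3x + 5y  subject to  x <= 4,  2y <= 12,  3x + 2y <= 18
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate the objective
res = linprog(-c, A_ub=A, b_ub=b, method="highs")
primal_optimum = -res.fun
# duals of the maximization problem = negated marginals of the minimization
shadow_prices = -res.ineqlin.marginals

# strong duality: resource value at shadow prices equals the optimum
assert abs(shadow_prices @ b - primal_optimum) < 1e-9
```

Requires SciPy ≥ 1.7 for the `res.ineqlin.marginals` attribute of the HiGHS solver.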

Relevance:

90.00%

Publisher:

Abstract:

A study was conducted to test the therapeutic effects of assessment feedback on rapport-building and self-enhancement variables (self-verification, self-discovery, self-esteem), as well as symptomatology. Assessment feedback was provided in the form of interpretive information based on the results of the Millon Clinical Multiaxial Inventory-III (MCMI-III). Participants (N = 89) were randomly assigned to three groups: a Feedback group, a Reflective-Counseling group, and a No-Feedback group. The Feedback group was provided with assessment feedback, the Reflective-Counseling group was asked to comment on the meaning of taking the MCMI-III, and the No-Feedback group received general information about the MCMI-III. Results revealed that assessment feedback, when provided in the form of interpretive information, positively affects rapport-building and self-enhancement variables (self-verification and self-discovery). No significant results were found in terms of self-esteem or symptom decrease as a function of feedback. However, a significant decrease in symptoms across groups was found. Results indicate that assessment feedback in the form of interpretive information can be used as a starting point in therapy. Implications of the findings are discussed with respect to theory and clinical practice.


Relevance:

90.00%

Publisher:

Abstract:

Recognizing neonatal pain is a challenge for nurses working with newborns because of the complexity of the pain phenomenon: pain is subjective, and because infants lack the ability to communicate, their pain is difficult to recognize. The purpose of this study is to determine the effectiveness of education on NICU nurses' ability to assess neonatal pain; with a better understanding of pain theory and the effects of pain on the newborn, nurses will be better able to assess newborns in pain. Designed as a quasi-experimental one-group pretest and posttest study, data were collected on a convenience sample of 49 registered nurses employed in the neonatal and special care nursery units at a children's hospital in the Miami area. The nurses were surveyed on the assessment of neonatal pain using the General Information and Pain Sensitivity Questionnaire. After the initial survey, the nurses received a one-hour in-service education program on neonatal pain assessment. One week after the intervention the nurses were asked to complete the questionnaire again. Data analysis involved comparison of pre- and post-intervention findings using descriptive methods, t tests, correlation coefficients, and ANOVA, where applicable. Findings revealed a significant (p = .006) increase in nurses' knowledge of neonatal pain assessment after the educational in-service, comparing pre-test and post-test results.

Relevance:

90.00%

Publisher:

Abstract:

Information constitutes one of the most valuable strategic assets of an organization. However, the organizational environment in which it is embedded is complex and heterogeneous, making issues related to information technology (IT) governance and information security increasingly relevant. Academic studies and market surveys indicate that most incidents involving information assets originate in the behavior of people within the organization itself rather than in external attacks. To promote a culture of security among users and to ensure the protection of information in its properties of confidentiality, integrity and availability, organizations must establish an Information Security Policy (PSI). This policy formalizes the guidelines for the security of corporate information resources, so that asset vulnerabilities are not exploited by threats with negative consequences for the business. For the PSI to be effective, however, users must be willing to accept and follow its procedures and security standards. In this context, the present study investigates which extrinsic and intrinsic motivators affect users' willingness to comply with the organization's security policies. The theoretical framework addresses IT governance, information security, deterrence theory, motivation and pro-social behavior. A theoretical model was created based on the studies of Herath and Rao (2009) and D'Arcy, Hovav and Galletta (2009), which build on General Deterrence Theory and propose the following factors influencing compliance with the Policy: severity of punishment, certainty of detection, peer behaviour, normative beliefs, perceived effectiveness and moral commitment. The research used a quantitative, descriptive approach.
The data were collected through a questionnaire with 18 variables on a five-point Likert scale representing the influencing factors proposed by the theory. The sample consisted of 391 students entering courses at the Center for Applied Social Sciences of the Universidade Federal do Rio Grande do Norte. For the data analysis, exploratory factor analysis, hierarchical and non-hierarchical cluster analysis, logistic regression and multiple linear regression were employed. As the main result, it is noteworthy that severity of punishment is the factor that contributes most to the theoretical model and also drives the division of the sample between more predisposed and less predisposed users. As a practical implication, the applied research model allows organizations to identify less predisposed users, target them with awareness and training actions, and write more effective security policies.
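As a hedged sketch of the kind of analysis described (the factor scores and labels below are fabricated for illustration, not the study's data), a logistic regression separating more and less predisposed users from two factor scores might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical mean factor scores per respondent on a 1-5 Likert scale:
# columns = (severity of punishment, certainty of detection)
X = np.array([[4.5, 4.0], [4.2, 3.8], [4.8, 4.5],
              [1.5, 2.0], [2.0, 1.8], [1.2, 2.2]])
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = predisposed to comply

clf = LogisticRegression().fit(X, y)
prediction = clf.predict([[4.6, 4.1]])   # high scores -> predisposed
```

In practice the predictors would be the factor scores extracted by the exploratory factor analysis, and the binary label would come from the cluster assignment.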

Relevance:

90.00%

Publisher:

Abstract:

The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. Chapters 1 and 2 present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those of previously published models. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
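Model selection "based on information theory" typically means ranking candidates by the Akaike information criterion, AIC = 2k − 2 ln L, which penalizes extra parameters. A minimal sketch with invented log-likelihoods (the model names and values are hypothetical, not the dissertation's results):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

# hypothetical fits: (name, maximized log-likelihood, free parameters)
candidates = [
    ("baseline hazard",        -1204.7, 3),
    ("pressure-scaled hazard", -1198.2, 4),
    ("exponent-scaled hazard", -1197.9, 6),
]
ranked = sorted(candidates, key=lambda m: aic(m[1], m[2]))
best = ranked[0][0]   # smallest AIC wins
```

Note how the six-parameter model loses despite the highest likelihood: its two extra parameters cost more than the fit improvement buys.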

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve the correlation between the model predictions and the observed data. Model selection techniques identified two models as having the best overall performance, but comparison with the best-performing model without delay, and model selection using our best-identified no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure will not occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of those regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
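A root bounding technique of the kind described can be sketched generically as bisection on a sign-changing indicator function; the indicator below is a stand-in, not the dissertation's actual risk metric:

```python
def bisect_boundary(f, lo, hi, tol=1e-10):
    """Locate the zero crossing of f on [lo, hi], assuming
    f(lo) < 0 < f(hi); returns the boundary to within tol."""
    assert f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# stand-in indicator: negative where the model predicts zero risk
# (failure region), positive once the parameter admits nonzero risk
boundary = bisect_boundary(lambda p: p * p - 2.0, 0.0, 2.0)
```

Because bisection only needs the sign of the indicator, it is robust even when the risk metric itself is expensive or noisy to evaluate away from the boundary.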

Relevance:

90.00%

Publisher:

Abstract:

We present the results of applying an integrative information and knowledge audit methodology at a research center of the Ministry of Science, Technology and the Environment in the province of Holguín, Cuba. The methodology comprises seven stages with a hybrid approach aimed at reviewing the information and knowledge management strategy and policy; identifying, inventorying and mapping information and knowledge (I+C) resources and their flows; and assessing the processes associated with their management. The center's senior management, specialists and researchers attested to the effectiveness of the applied methodology, whose results led to readjusting the strategic projection of I+C management, redesigning the information flows of the key processes, compiling a directory of experts by area, and planning future learning and professional development.

Relevance:

90.00%

Publisher:

Abstract:

Abstract: The development of the polymer industry provides an ever-wider choice of materials for the formulation of floor coverings. Rubbers, PVC and linoleum are the polymers usually used in the floor-covering industry. This project addresses a problem with the ease of cleaning of rubber floor coverings, which are known to be soft, tacky and rough-surfaced. The INTRODUCTION covers the current state of research on floor coverings, particularly with regard to the problem of cleanability. The relevant theory and general information on polymers, polymer composites and surface science are introduced in CHAPTER 1. CHAPTER 2 then covers the method used to determine cleanability, the evaluation of the results, and the equipment used. CHAPTER 3 discusses the first experiments on the effect of wettability, roughness and hardness on the ease of cleaning of pure polymers. Several polymers with more or less hydrophobic surfaces are investigated to observe their effect on cleanability. The effect of roughness on cleanability is investigated by imprinting a defined roughness during the molding of the samples; the influence of hardness is also studied. A soiling/cleaning model is then established from our results and observations in order to rationalize the factors, or "rules", that determine the ease of cleaning of surfaces. Finally, peroxide crosslinking is studied as a method of modifying polymers with the aim of improving their cleanability; a mechanism derived from the results of these studies is presented. CHAPTER 4 extends this research to polymer blends, which usually serve to optimize the performance of pure polymers.
In this chapter, the same tests discussed in CHAPTER 3 are used to verify the cleanability model established above. In addition, the influence of the immiscibility of the polymer blends is discussed from the thermodynamic (DSC) and morphological (SEM) points of view. Peroxide crosslinking is studied in EPDM/(E-ran-MAA(Zn)-ran-BuMA) blends in order to improve the compatibility of these polymers. The effects of the crosslinking-agent dosage and the curing time are also examined. Finally, a pre-crosslinked compatibilizer was developed for the ternary EPDM/(E-ran-MAA(Zn)-ran-BuMA)/HSR blends; its effect on the cleanability and morphology of the blend is presented.

Relevance:

90.00%

Publisher:

Abstract:

The exponential increase in the demand for communication bandwidth suggests that the capacity of telecommunication networks will saturate within the next decade. Indeed, information theory predicts that nonlinear effects in single-mode fibers limit their transmission capacity, and little further gain can be expected from the traditional multiplexing techniques developed and used to date in high-bit-rate systems. The spatial dimension of the optical channel has been proposed as a new degree of freedom that can be used to increase the number of transmission channels and thus avert this threatened "capacity crunch". Inspired by microwave techniques, the emerging technique of space-division multiplexing (SDM) is thus a promising technology for building next-generation optical networks. To implement SDM in optical fiber links, all the integrated devices, equipment and subsystems must be re-examined. Among these elements, the SDM optical amplifier is critical, particularly for long-haul transmission systems. Because of the excellent characteristics of the erbium-doped fiber amplifier (EDFA) used in today's state-of-the-art systems, the EDFA is again a leading candidate for the implementation of practical SDM amplifiers. However, since SDM introduces a spatial variation of the field in the transverse plane of the fiber, spatially integrated erbium-doped fiber amplifiers (SIEDFA) require careful design. In this thesis, we first review recent progress in SDM, in particular SDM optical amplifiers. We then identify and discuss the main issues of SIEDFA that call for scientific investigation.
Following this, the theory of EDFAs is briefly presented and a numerical model that can be used to simulate SIEDFA is proposed. Based on a home-made simulation tool, we propose a novel annular doping profile for erbium-doped few-mode fibers (ED-FMF) and numerically evaluate the performance of a single-stage amplifier with an annularly doped fiber, as well as a dual-stage amplifier, for few-mode communications. Subsequently, we design erbium-doped multi-core fibers with annular cladding (ED-MCF) and numerically evaluate the overlap of the pump with the multiple cores of these amplifiers. Beyond the design, we fabricate and characterize an erbium-doped few-mode multi-core fiber and carry out the first demonstration of spatially integrated optical fiber amplifiers incorporating such doped fibers. Finally, we present the conclusions and perspectives of this research. The research and development of SIEDFA will offer enormous benefits not only for future SDM transmission systems but also for single-mode transmission systems over standard single-core fibers, since they make it possible to replace several amplifiers with a single integrated amplifier.

Relevance:

90.00%

Publisher:

Abstract:

Although counterfactual thinking is typically activated by a negative outcome, it can have positive effects by helping to regulate and improve future behavior. Known as the content-specific pathway, these counterfactual ruminations use relevant information (i.e., information that is directly related to the problem at hand) to elicit insights about the problem, create a connection between the counterfactual and the desired behavior, and strengthen relevant behavioral intentions. The current research examines how changing the type of relevant information provided (i.e., so that it is either concrete and detailed or general and abstract) influences the relationship between counterfactual thinking and behavioral intentions. Experiments 1 and 2 found that counterfactual thinking facilitated relevant intentions when these statements involved detailed information (Experiment 1) or specific behaviors (Experiment 2) compared to general information (Experiment 1), categories of behavior, or traits (Experiment 2). Experiment 3 found that counterfactuals containing a category of behavior facilitated specific behavioral intentions, relative to counterfactuals focusing on a trait. However, counterfactuals only facilitated intentions that included specific behaviors, but not when intentions focused on categories of behaviors or traits (Experiment 4). Finally, this effect generalized to other relevant specific behaviors; a counterfactual based on one relevant specific behavior facilitated an intention based on another relevant specific behavior (Experiment 5). Together, these studies further clarify our understanding of the content-specific pathway and provide a more comprehensive understanding of functional counterfactual thinking.