905 results for pacs: expert systems and other ai software and techniques
Abstract:
This project studies solar water heating systems that operate thermosiphonically. It deals specifically with two designs produced by manufacturers in the Province of Córdoba, who have asked the Solar Energy Group (GES) of the National University of Río Cuarto for help improving the thermal performance of their equipment. The two systems are built with non-traditional materials and also differ in the arrangement of their storage tanks: one is vertical, the other horizontal. Based on the results of a test under an international standard, which identified several points open to improvement, this project proposes a detailed analysis of the systems: they will be completely disassembled for an analytical and experimental study, leading to a theoretical-analytical description of their behaviour, the implementation of improvement proposals, and verification of the results. The objective is therefore to improve the thermal performance of these systems through experimental and analytical study. Assuming this improvement is possible, the working hypothesis is that the operation of these systems can be represented by physical-mathematical models built from known equations and correlations, with the underlying processes interpreted through numerical solutions and specific simulation software. The plan is thus to disassemble the systems completely, study their structure and internal connections in detail and, from the geometry, dimensions and thermophysical properties of the construction materials and working fluids, build physical-mathematical models that allow variations of properties and geometry in the search for the combinations that yield more thermally efficient equipment. The models will be coded in high-level programming languages so that, after validation, simulations can be run in an internationally recognized energy simulation package (TRNSYS) that can incorporate such models through a communication protocol, putting the powerful features of the software at the service of our models. The study will be complemented with an exergy analysis to identify the critical points where opportunities to exploit the available energy are lost, in order to analyze how to solve the problems at those points. The materials to be used are the systems themselves, provided by the manufacturers and suitably modified to operate as prototypes. The expected outcome is a thorough understanding of the processes and operating principles of the systems, allowing improvements to be proposed and implemented in the prototypes, followed by a measurement under the same standard as the initial one to quantify the improvements achieved. In the technology-transfer stage, the improvements should benefit the companies involved not only technically but also economically.

To that end, the manufacturing processes and methods will also be studied so that the improved systems cost no more than the originals and, if possible, are even cheaper, all with a view to spreading solar thermal energy and putting these systems, so valuable for the uptake of clean energy, within everyone's reach. The project will also benefit the scientific community in general by contributing new results on novel designs with new materials. Finally, the institution will benefit from the training received by the project members, many of whom are pursuing postgraduate studies at an important stage of their careers as researchers.
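To make the modelling idea concrete, below is a minimal, hedged sketch of the kind of single-node tank energy balance such physical-mathematical models start from. Every parameter (collector area, optical efficiency, loss coefficient, irradiance profile) is invented for illustration; this is not the project's validated model nor its TRNSYS coupling.

```python
# Minimal single-node energy balance for a solar collector + storage
# tank, integrated with forward Euler. All parameters are invented for
# illustration; the project's models are far more detailed.
import math

A_c, eta = 2.0, 0.55          # collector area (m^2), optical efficiency
UA = 2.5                      # tank loss coefficient (W/K)
m, cp = 150.0, 4186.0         # water mass (kg), specific heat (J/kg.K)
T, T_amb = 20.0, 20.0         # initial tank and ambient temperature (C)
dt = 60.0                     # time step (s)

for step in range(24 * 60):   # one day, minute by minute
    hour = step / 60.0
    # crude bell-shaped irradiance peaking at solar noon (W/m^2)
    G = max(0.0, 900.0 * math.sin(math.pi * (hour - 6.0) / 12.0))
    q_gain = eta * A_c * G            # collector gain (W)
    q_loss = UA * (T - T_amb)         # tank losses (W)
    T += dt * (q_gain - q_loss) / (m * cp)

print(f"tank temperature after one day: {T:.1f} C")
```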
Abstract:
Background: The analysis and usage of biological data is hindered by the spread of information across multiple repositories and the difficulties posed by different nomenclature systems and storage formats. In particular, there is an important need for data unification in the study and use of protein-protein interactions. Without good integration strategies, it is difficult to analyze the whole set of available data and its properties. Results: We introduce BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management. BIANA is a Python framework designed to achieve two major goals: i) the integration of multiple sources of biological information, including biological entities and their relationships, and ii) the management of biological information as a network where entities are nodes and relationships are edges. Moreover, BIANA uses properties of proteins and genes to infer latent biomolecular relationships by transferring edges to entities sharing similar properties. BIANA is also provided as a plugin for Cytoscape, which allows users to visualize and interactively manage the data. A web interface to BIANA providing basic functionalities is also available. The software can be downloaded under GNU GPL license from http://sbi.imim.es/web/BIANA.php. Conclusions: BIANA's approach to data unification solves many of the nomenclature issues common to systems dealing with biological data. BIANA can easily be extended to handle new specific data repositories and new specific data types. The unification protocol allows BIANA to be a flexible tool suitable for different user requirements: non-expert users can use a suggested unification protocol while expert users can define their own specific unification rules.
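To make the two goals concrete, here is a minimal sketch in Python (BIANA's implementation language) of unification and edge transfer, using networkx. The repository names, identifiers and similarity rule are hypothetical stand-ins; this is not BIANA's actual API.

```python
# Illustrative sketch of the two ideas described above, using networkx.
# This is NOT BIANA's API; entity names, identifiers and the similarity
# rule below are hypothetical stand-ins for the unification protocol.
import networkx as nx

# Raw records from two hypothetical repositories, keyed by different
# nomenclatures that happen to share a cross-reference (uniprot id).
records = [
    {"source": "dbA", "id": "P53_HUMAN",  "uniprot": "P04637"},
    {"source": "dbB", "id": "TP53",       "uniprot": "P04637"},
    {"source": "dbA", "id": "MDM2_HUMAN", "uniprot": "Q00987"},
]

# Unification: records sharing a cross-reference collapse into one node.
unified = {}
for rec in records:
    unified.setdefault(rec["uniprot"], []).append(rec)

g = nx.Graph()
g.add_nodes_from(unified)            # one node per unified entity
g.add_edge("P04637", "Q00987")       # a known interaction becomes an edge

# Edge transfer: infer a latent relationship for an entity that shares a
# property (here, a hypothetical sequence-similarity cluster) with P04637.
similar = {"P04637": {"P02340"}}     # P02340: assumed-similar protein
g.add_node("P02340")
for u, v in list(g.edges):
    for twin in similar.get(u, ()):
        g.add_edge(twin, v)          # transferred (inferred) edge

print(sorted(g.edges))
```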
Abstract:
A principal objective of software engineering is to be able to produce complex, large and reliable software in a reasonable time. Object-oriented (OO) technology has provided good concepts and techniques for modelling and programming that have made it possible to develop complex applications in both academia and industry. This experience, however, also revealed the weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns show up as the same code scattered across several modules of a system, or as several pieces of code tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others, and then to assemble them according to well-defined rules. AO programming therefore promises better productivity, better code reuse and better adaptability of code to change. This new approach quickly spread across the whole software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. The AO technology nevertheless presents many challenges. Reasoning about, specifying and verifying AO programs is difficult, all the more so as these programs evolve over time. Modular reasoning about such programs is therefore required; otherwise they would have to be re-examined in full every time a component is changed or added. It is well known in the literature, however, that modular reasoning about AO programs is hard, since the applied aspects often change the behaviour of their base components [47]. The same difficulties are present in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting field of research. Likewise, interaction between aspects is a serious problem in the aspect community. To face these problems, we chose to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we built on the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of this thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we defined a logic, LA, used in the body of specifications to describe the behaviour of their components. The third contribution is the definition of the weaving operator that captures the interconnection relation between aspect modules and class modules. The fourth contribution is the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
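As a small illustration of the crosscutting-concern problem and of weaving, the Python sketch below implements a logging aspect with a decorator. It only illustrates the idea discussed above, not the thesis's algebraic-specification or category-theoretic machinery; all names are invented.

```python
# A minimal illustration of a crosscutting concern and of "weaving",
# using Python decorators. The logging concern is written once instead
# of being scattered across every module of the system.
import functools
import time

def logging_aspect(func):
    """Advice woven around a base function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

@logging_aspect            # weaving point (join point) for the aspect
def transfer(amount: float) -> float:
    return amount * 0.99   # base-component logic, free of logging code

transfer(100.0)
```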
Abstract:
The basic idea behind improving local food security consists of two paths: first, accessibility (price, stock) and, second, availability (quantity and biodiversity); both are prerequisites for the provision of nutrients and a continuous food supply from locally available resources. The objectives of this thesis are to investigate whether indigenous knowledge still plays an important role in traditional farming in Minangkabau culture, thus supporting local food security; whether indigenous knowledge still shapes food culture in Minangkabau culture, linked to the matrilineal role, and leads to sound nutrition; whether marantau influences traditional farming and food culture among the Minangkabau; and whether the local government plays a role in changing traditional farming systems and food culture. Furthermore, this thesis examines whether education and gender play a role in changing the traditional farming system and food culture, and whether the mass media affect traditional farming systems and food culture for the Minangkabau. The study was carried out at four locations in West Sumatera: Nagari Ulakan (NU) (coastal area), Nagari Aia Batumbuak (NAB) (hilly area), Nagari Padang Laweh Malalo (NPLM) (lake area) and Nagari Pandai Sikek (NPS) (hilly area). Annual rainfall ranges from 1400 to 4800 mm, and the soils are fertile. Data were collected using PRA (Participatory Rural Appraisal) to investigate indigenous knowledge (IK) and its interactions, combined with in-depth interviews, life histories, a survey using a semi-structured questionnaire, pictures, mapping and expert interviews. The data were collected from June to September 2009 and in June 2010. The materials were a map of the area, lists of names, questionnaires, a voice recorder, a notebook and a digital camera. The sampling method was snowball sampling, which yielded both qualitative and quantitative data. For qualitative data, ethnography and life history were used; for quantitative data, a statistical survey with a semi-structured questionnaire. Fifty respondents per site participated voluntarily. Data were analyzed with MAXQDA 10 and the F4 audio analysis software (created and developed at Philipps-University Marburg), and clustered based on causality. The results show that the role of IK in the traditional farming system (TFS) is most evident in NPLM, which has higher food crop biodiversity than the other three sites even though it has relatively similar temperature and rainfall. This high food crop biodiversity is due to the awareness of local people, who realize that they live in an unfavourable climate and topography and are therefore better prepared for any changes that may occur. Carbohydrate intake comes entirely from rice, even though different staple crops are grown; most interviewees said that, for them, not eating rice is like not really eating. In addition, mothers still play an important role in kitchen activities, but when agricultural income is low, mothers have to decide whether to change the meals or feel insecure about their food supply. Marantau has a positive impact through the remittances it provides for investment in the farm; on the other hand, it leaves fewer workers for agriculture and therefore has a negative impact on the transfer of IK.
The investigation showed that the local government has a PTS (Padi Tanam Sabatang) programme, which still does not guarantee that farmers obtain sufficient revenue from their land. The low agricultural income leads to a situation of potential food insecurity. Education is evidently equal between men and women, but in some cases women tend to leave school earlier because of arranged marriages or the distance of schools from their homes. Men predominantly work in agriculture and fishing, while women work in the kitchen. In NAB, even though women work on farmland, they earn less than men. Weaving (in NPS) and kitchen activity are recognized as women's work, which also supports the household income. Mass media are currently not producing changes in the TFS or food culture. The traditional farming system has changed because of intensive agricultural extension, which has introduced new methods of agriculture over the last three decades (since the 1980s). There is no evidence that people want to change their food habits because of the mass media, despite the lapau activity, which gives them more food choices instead of traditional meals prepared at home. The recommendations of this thesis are: 1) Empowerment of farmers, regarding the self-sufficient supply of manure, cooperative seed and sustainable farm management. Farmers should know where they stand in their state of knowledge, so that they can use their local wisdom while still collaborating with new sources of knowledge. Farmers should learn to forecast supply and demand prior to harvest. Farm management guidelines are needed, drawing on both local wisdom and modern knowledge. 2) Increase of non-agricultural income. Increasing non-agricultural income is strongly recommended; remittances can be invested in non-agricultural jobs. 3) Empowerment of the mother. The mother plays an important role in farm-to-fork activities and can be an initiator and promoter of cultivating spices in the backyard. Nutritional knowledge can be improved through information and informal public education, for example through arisan ibu-ibu and lapau activities. The challenges in applying these recommendations are: 1) the gap between institutions and organizations of local government, since more than one institution is involved in food security policy; and 2) the need for training and facilities for field extension agriculture (FEA), because the rapidly changing interaction between local government and farmers depends on this agency.
Abstract:
The aim of this thesis is to narrow the gap between two different control techniques: continuous control and discrete event control (DES) techniques. This gap can be reduced by the study of hybrid systems, and by interpreting the majority of large-scale systems as hybrid systems. In particular, when looking deeply into a process, it is often possible to identify interaction between discrete and continuous signals. Hybrid systems are systems that have both continuous and discrete signals. Continuous signals are generally assumed continuous and differentiable in time, whereas discrete signals are neither continuous nor differentiable in time, due to their abrupt changes. Continuous signals often represent measurements of natural physical magnitudes such as temperature, pressure, etc. Discrete signals are normally artificial signals operated by human artefacts, such as current, voltage or light. Typical processes modelled as hybrid systems are production systems, chemical processes, or continuous production in which time and continuous measures interact with transport and stock inventory systems. Complex systems such as manufacturing lines are hybrid in a global sense: they can be decomposed into several subsystems and their links. Another motivation for the study of hybrid systems is the set of tools developed in other research domains. These tools benefit from the use of temporal logic for the analysis of several properties of hybrid system models, and use it to design systems and controllers that satisfy physical or imposed restrictions. This thesis focuses on particular types of systems with discrete and continuous signals in interaction, which can model hard non-linearities, such as hysteresis, jumps in the state, limit cycles, etc., and whose possible non-deterministic future behaviour is expressed by an interpretable model description. The hybrid systems treated in this work are systems with several discrete states, always fewer than thirty (beyond which the problem can become NP-hard), and continuous dynamics evolving according to an expression of the form dx/dt = K_i x, with K_i ∈ R^n constant vectors or matrices acting on the components of the state vector X. In several discrete states the continuous evolution can have K_i = 0. In this formulation the mathematics can express time-invariant linear systems and, by using this expression locally, the combination of several local linear models makes it possible to represent non-linear systems; together with the interaction with the discrete events of the system, the model can compose non-linear hybrid systems. Multistage processes with fast continuous dynamics in particular are well represented by the proposed methodology. State vectors with more than two components, such as third-order models or higher, are well approximated by the proposed approach. Flexible belt transmissions, chemical reactions with an initial start-up, and mobile robots with significant friction are examples of physical systems that profit from the accuracy of the proposed methodology. The motivation of this thesis is to obtain a solution that can control and drive a hybrid system from the origin or starting point to the goal. How to obtain this solution, and which solution is best in terms of a cost function subject to the physical restrictions and control actions, is analysed. Hybrid systems that have several possible states, different ways to drive the system to the goal, and different continuous control signals are the problems that motivate this research.
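To fix ideas, the sketch below simulates a toy switched system of the kind just described, assuming the reconstructed local dynamics dx/dt = K_i x. The modes, matrices and guard are invented for illustration and are not the systems studied in the thesis.

```python
# Minimal sketch of a switched (hybrid) linear system: a few discrete
# states, each with local dynamics dx/dt = K_i x. Matrices and the
# switching rule are invented for illustration only.
import numpy as np

K = {                              # one local LTI model per discrete state
    "heat": np.array([[0.0, 1.0], [-1.0, -0.2]]),
    "idle": np.zeros((2, 2)),      # a state where K_i = 0 (no evolution)
}

def switch(x, mode):
    """Discrete-event part: jump between modes when a guard is crossed."""
    if mode == "heat" and abs(x[0]) < 0.05:
        return "idle"
    return mode

x, mode, dt = np.array([1.0, 0.0]), "heat", 0.01
for step in range(2000):           # forward-Euler integration of dx/dt = K x
    x = x + dt * K[mode] @ x
    mode = switch(x, mode)

print(mode, x)
```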
The requirements for the system on which we work are: a model that can represent the behaviour of non-linear systems and that enables the prediction of the model's possible future behaviour, in order to apply a supervisor that decides the optimal and safe action to drive the system toward the goal. Specific problems that can be addressed by the use of this kind of hybrid model are: the unity of order; controlling the system along a reachable path; controlling the system along a safe path; optimising the cost function; and modularity of control. The proposed model solves the specified problems in the switching-model problem, the initial-condition calculus and the unity of the order of the models. Continuous and discrete phenomena are represented in linear hybrid models, defined by an eight-tuple of parameters to model different types of hybrid phenomena. Applying a transformation over the state vector of an LTI system, we obtain from a two-dimensional state space a single parameter, alpha, which still maintains the dynamical information. Combining this parameter with the system output, a complete description of the system is obtained in the form of a graph in polar representation. A Takagi-Sugeno type III fuzzy model includes a linear time-invariant (LTI) model for each local model; the fuzzification of the different LTI local models yields a non-linear time-invariant model. In our case the output and the alpha measure govern the membership functions. Hybrid systems control is a huge task: the process needs to be guided from the starting point to the desired end point, passing through different specific states and points of the trajectory. The system can be structured in different levels of abstraction, and the control of hybrid systems in three layers, from planning the process to producing the actions: the planning, process and control layers. In this case the algorithms will be applied to robotics (a domain where improvements are well accepted), where simple repetitive processes are expected, for which the extra effort in complexity can be compensated by some cost reductions. It may also be interesting to apply control optimisation to processes such as fuel injection, DC-DC converters, etc. In order to apply the Ramadge-Wonham (RW) theory of discrete event systems to a hybrid system, we must abstract the continuous signals and project the events generated by these signals, to obtain new sets of observable and controllable events. Ramadge & Wonham's theory, along with the TCT software, gives a controllable sublanguage of the legal language generated for a discrete event system (DES). Continuous abstraction transforms predicates over continuous variables into controllable or uncontrollable events, and modifies the sets of uncontrollable, controllable, observable and unobservable events. Continuous signals produce virtual events in the system when they cross the bound limits. If such an event is deterministic, it can be projected; it is necessary to determine its controllability in order to assign it to the corresponding set of controllable, uncontrollable, observable or unobservable events. Finding optimal trajectories that minimise a cost function is the goal of the modelling procedure. A mathematical model of the system allows the user to apply mathematical techniques to this expression: to minimise a specific cost function, to obtain optimal controllers, and to approximate a specific trajectory.
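The following is a minimal sketch of the Takagi-Sugeno blending mechanism mentioned above: local LTI models combined through membership functions. The two local models, the scheduling variable and the weights are invented, standing in for the output/alpha pair used in the thesis.

```python
# Minimal sketch of Takagi-Sugeno blending of local LTI models. The
# local models and the membership functions are invented examples.
import numpy as np

A1 = np.array([[-1.0, 0.0], [0.0, -2.0]])   # local LTI model 1
A2 = np.array([[-0.2, 1.0], [-1.0, -0.2]])  # local LTI model 2

def memberships(z):
    """Simple normalized weights over a scheduling variable z in [0, 1]
    (in the thesis, the output and the alpha measure play this role)."""
    w1 = max(0.0, 1.0 - z)
    w2 = max(0.0, z)
    s = w1 + w2
    return w1 / s, w2 / s

def blended_dynamics(x, z):
    """Fuzzy blend: dx/dt = sum_i w_i(z) * A_i x, a non-linear model
    built entirely from linear local models."""
    w1, w2 = memberships(z)
    return (w1 * A1 + w2 * A2) @ x

x = np.array([1.0, -0.5])
print(blended_dynamics(x, z=0.3))
```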
The combination of dynamic programming with Bellman's principle of optimality gives us the procedure to solve the minimum-time trajectory for hybrid systems. The problem is harder when there is interaction between adjacent states. In hybrid systems the problem is to determine the partial set points to be applied to the local models. An optimal controller can be implemented in each local model in order to ensure the minimisation of the local costs. The solution of this problem must give us the trajectory the system is to follow: a trajectory marked by a set of set points that force the system to pass over them. Several ways are possible to drive the system from the starting point Xi to the end point Xf. Different ways are interesting in different respects: dynamic behaviour, minimum number of states, approximation to set points, etc. These ways need to be safe, viable and reachable (RchW), and only one of them must be applied, normally the best one, which minimises the proposed cost function. A reachable way, meaning a controllable and safe way, will be evaluated in order to determine which one minimises the cost function. The contribution of this work is a complete framework for working with the majority of hybrid systems; the procedures to model, control and supervise are defined and explained, and their use is demonstrated. Also explained is the procedure to model the systems to be analysed for automatic verification. Great improvements were obtained by using this methodology in comparison with other piecewise linear approximations, and it is demonstrated that in particular cases this methodology can provide the best approximation. The most important contribution of this work is the alpha approximation for non-linear systems with fast dynamics. While this kind of process is not typical, in such cases the alpha approximation is the best linear approximation to use and gives a compact representation.
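As an illustration of dynamic programming with Bellman's principle for a minimum-time trajectory, the sketch below runs value iteration over a small invented graph of discrete states. In the thesis the nodes would be discrete states of the hybrid system and the edge weights transition times between set points; here everything is a placeholder.

```python
# Minimal sketch of Bellman-style dynamic programming for a minimum-
# time path over a small graph of discrete states. Graph and costs are
# invented for illustration.
INF = float("inf")
states = ["Xi", "A", "B", "Xf"]
cost = {                     # time to move between adjacent states
    ("Xi", "A"): 2.0, ("Xi", "B"): 5.0,
    ("A", "B"): 1.0, ("A", "Xf"): 6.0, ("B", "Xf"): 2.0,
}

# Value iteration: V(s) = min over successors s' of cost(s, s') + V(s').
V = {s: INF for s in states}
V["Xf"] = 0.0
for _ in range(len(states)):            # enough sweeps for convergence
    for (s, t), c in cost.items():
        V[s] = min(V[s], c + V[t])

# Recover the optimal trajectory by greedy descent on V.
path, s = ["Xi"], "Xi"
while s != "Xf":
    s = min((t for (u, t) in cost if u == s),
            key=lambda t: cost[(s, t)] + V[t])
    path.append(s)

print(V["Xi"], path)   # 5.0 ['Xi', 'A', 'B', 'Xf']
```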
Abstract:
Objectives: To compare simulated periodontal bone defect depth measured in digital radiographs with dedicated and non-dedicated software systems, and to compare the depth measurements from each program with the measurements in dry mandibles. Methods: Forty periodontal bone defects were created at the proximal area of the first premolar in dry pig mandibles. Measurements of the defects were performed with a periodontal probe in the dry mandible. Periapical digital radiographs of the defects were recorded using the Schick sensor in a standardized exposure setting. All images were read using the Schick dedicated software system (CDR DICOM for Windows v.3.5) and three commonly available non-dedicated software systems (Vix Win 2000 v.1.2, Adobe Photoshop 7.0 and Image Tool 3.0). The defects were measured three times in each image and a consensus was reached among three examiners using the four software systems. The differences between the radiographic measurements were analysed using analysis of variance (ANOVA), and the measurements from each software system were compared with the dry mandible measurements using Student's t-test. Results: The mean values of the bone defects measured in the radiographs were 5.07 mm, 5.06 mm, 5.01 mm and 5.11 mm for CDR Digital Imaging and Communications in Medicine (DICOM) for Windows, Vix Win, Adobe Photoshop and Image Tool, respectively, and 6.67 mm for the dry mandible. The means of the measurements performed in the four software systems were not significantly different by ANOVA (P = 0.958). A significant underestimation of defect depth was obtained when the mean depths from each software system were compared with the dry mandible measurements (t-test; P ≈ 0.000). Conclusions: The periodontal bone defect measurements in the dedicated and the three non-dedicated software systems were not significantly different, but they all underestimated the defect depth when compared with the measurements obtained in the dry mandibles.
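A hedged sketch of the statistical pipeline described above: one-way ANOVA across the four software systems, then t-tests against the dry-mandible reference. The arrays below are randomly generated placeholders seeded with the reported means, not the study's data, and the one-sample test is a simplification of the paired comparison actually performed.

```python
# Illustrative ANOVA + t-test comparison with placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dry = 6.67                              # reported dry-mandible mean (mm)
software = {                            # hypothetical repeated measures
    "CDR DICOM":  rng.normal(5.07, 0.3, 40),
    "Vix Win":    rng.normal(5.06, 0.3, 40),
    "Photoshop":  rng.normal(5.01, 0.3, 40),
    "Image Tool": rng.normal(5.11, 0.3, 40),
}

f, p = stats.f_oneway(*software.values())
print(f"ANOVA across systems: F={f:.3f}, P={p:.3f}")

for name, x in software.items():
    t, p = stats.ttest_1samp(x, popmean=dry)
    print(f"{name}: mean={x.mean():.2f} mm vs {dry} mm, P={p:.4f}")
```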
Abstract:
This work contributes to the ongoing debate on the productivity paradox by considering CIOs' perceptions of IT business value. Applying regression analysis to data from an international survey, we study how the adoption of certain types of enterprise software affects the CIOs' perception of the impact of IT on the firm's business activities, and vice versa. Other potentially important factors, such as the country, sector and size of the firms, are also taken into account. Our results indicate stronger support for an impact of perceived IT benefits on the adoption of enterprise software than for the reverse. CIOs based in the US perceive IT benefits more strongly than their German counterparts. Furthermore, certain types of enterprise software appear to be more prevalent in the US.
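As an illustration of the kind of regression described above (adoption regressed on perceived benefit, with country and size as controls), here is a hedged sketch using statsmodels; the variable names and data are entirely invented.

```python
# Illustrative logistic regression on invented survey-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "adopted_erp": rng.integers(0, 2, n),          # hypothetical adoption flag
    "perceived_benefit": rng.normal(3.5, 1.0, n),  # hypothetical Likert score
    "country": rng.choice(["US", "DE"], n),
    "size_log": rng.normal(5.0, 1.0, n),           # log firm size
})

model = smf.logit("adopted_erp ~ perceived_benefit + C(country) + size_log",
                  data=df).fit(disp=0)
print(model.summary())
```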
Abstract:
This report addresses speculative parallelism (the assignment of spare processing resources to tasks which are not known to be strictly required for the successful completion of a computation) at the user and application level. At this level, the execution of a program is seen as a (dynamic) tree (a graph, in general). A solution for a problem is a traversal of this graph from the initial state to a node known to be the answer. Speculative parallelism then represents the assignment of resources to multiple branches of this graph even if they are not positively known to be on the path to a solution. In highly non-deterministic programs the branching factor can be very high and a naive assignment will very soon use up all the resources. This report presents work assignment strategies other than the usual depth-first and breadth-first. Instead, best-first strategies are used. Since their definition is application-dependent, the application language contains primitives that allow the user (or application programmer) to a) indicate when intelligent OR-parallelism should be used, b) provide the functions that define "best," and c) indicate when to use them. An abstract architecture enables those primitives to perform the search in a "speculative" way, using several processors, synchronizing them, killing the siblings of the path leading to the answer, etc. The user is freed from worrying about these interactions. Several search strategies are proposed and their implementation issues are addressed. "Armageddon," a global pruning method, is introduced, together with both a software and a hardware implementation for it. The concepts exposed are applicable to areas of Artificial Intelligence such as extensive expert systems, planning, game playing, and in general to large search problems. The proposed strategies, although showing promise, have not been evaluated by simulation or experimentation.
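The sketch below illustrates the report's core mechanism: best-first assignment of speculative work from a shared priority queue, with a global kill signal in the spirit of "Armageddon" that prunes all remaining branches once an answer is found. The toy search problem, the scoring function and the bound are invented for illustration.

```python
# Hedged sketch of best-first speculative search with global pruning.
import heapq
import threading

GOAL = 37
answer = threading.Event()          # global "kill all siblings" signal

def best(node):                     # user-supplied definition of "best"
    return abs(GOAL - node)

def expand(node):
    return [node * 2, node + 3]     # toy non-deterministic branching

def worker(frontier, lock):
    while not answer.is_set():
        with lock:
            if not frontier:
                return
            _, node = heapq.heappop(frontier)
        if node == GOAL:
            answer.set()            # Armageddon: prune every other branch
            print("solution:", node)
            return
        if node > 10 * GOAL:        # abandon hopeless speculative branches
            continue
        with lock:
            for child in expand(node):
                heapq.heappush(frontier, (best(child), child))

frontier, lock = [(best(1), 1)], threading.Lock()
threads = [threading.Thread(target=worker, args=(frontier, lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```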
Abstract:
By collective intelligence we understand a form of intelligence that emerges from the collaboration and participation of several individuals or, strictly speaking, several entities. Based on this simple definition, we can see that this concept is the field of study of the most diverse disciplines, such as sociology, information technology or biology, each of them focused on a different kind of entity: human beings, computational elements or animals. As a common factor, collective intelligence has always had the goal of promoting a group intelligence that surpasses the individual intelligence of the entities that constitute it, through mechanisms of coordination, cooperation, competition, integration, differentiation, etc. However, although collective intelligence has historically developed in a parallel and independent way within the different disciplines that deal with it, the advances in information technologies mean this is no longer enough. Nowadays, human beings and machines coexist, through all kinds of communication networks and interfaces, in environments where collective intelligence has taken on a new dimension: the aim is not only to achieve a collective behaviour superior to that of the constituent entities, but also to deal with individual intelligences that are completely different from one another. This poses the double challenge of managing this great heterogeneity while obtaining even more intelligent behaviours thanks to the synergies that the different kinds of intelligence can generate.

Within the areas of collective intelligence there are several open fields in which the aim is always to obtain performance superior to that of the individuals, for example collective consciousness, collective memory or collective wisdom. Among these, we focus on one that is present in practically every possible intelligent behaviour: decision making. The field of study of decision making is very wide, and its evolution has run completely parallel to that of collective intelligence: it first focused on the individual as the decision-making entity and later developed from a social and institutional point of view. The first phase in the study of decision making was based on very simple paradigms: analysis of advantages and disadvantages, prioritization based on maximizing some parameter of the outcome, the ability of alternatives to minimally satisfy the requirements, consulting experts or authorized entities, or even chance. However, just as the step from the individual to the group meant a new dimension within collective intelligence, collective decision making poses a new challenge in all the related disciplines. Moreover, within collective decision making two new fronts appear: centralized and decentralized decision-making systems. In this thesis we focus on the latter, which is the more attractive one, both for the opportunities to generate new knowledge and work on currently open problems, and for the applicability of the results that may be obtained.

Finally, within the field of decentralized decision-making systems there are several fundamental mechanisms that give rise to different approaches to the problems of this field, for example leadership, imitation, prescription or fear. We focus on one of the most multidisciplinary mechanisms, with the greatest capacity for application in all kinds of disciplines, and one that has historically shown that it can yield performance far superior to other decentralized decision mechanisms: trust and reputation. In short, trust is the belief of one entity that another will carry out a certain activity in a specific way. It is in principle subjective, since the trust of two different entities in a third need not be the same. Reputation, on the other hand, is the collective idea (or social evaluation) that the entities of a system have about another entity of the same system with respect to a certain criterion. It is therefore collective information, but unique within a system: it is not associated with each entity separately but equally with all of them. The immense majority of collective systems are based on these two simple definitions; indeed, many essays argue that no kind of organization would be viable without the existence and use of the concepts of trust and reputation. From now on, we call any system that uses these concepts in one way or another a trust and reputation system (TRS).

However, although TRSs are among the most everyday aspects of our lives, with a very wide field of application, the existing knowledge about them could hardly be more dispersed. There are a great number of scientific works in all kinds of knowledge areas: philosophy, psychology, sociology, economics, politics, information technologies, etc. But the main problem is that no complete vision of trust and reputation in their broadest sense exists. Each discipline focuses its studies on certain aspects of TRSs, but none of them tries to exploit the knowledge generated in the others to improve its performance in its own field of application. Aspects treated in great detail in some areas of knowledge are completely ignored in others; and even aspects treated by several disciplines, studied from different points of view, yield complementary results that are nevertheless not exploited outside those areas of knowledge. This leads to a very high dispersion of knowledge and to a lack of reuse of methodologies, policies and techniques from one discipline to another. Given its vital importance, this high dispersion of knowledge is one of the main problems this thesis aims to solve.

On the other hand, when working with TRSs, all aspects related to security are ever-present, since security is a vital topic within the field of decision making; moreover, TRSs are often used to carry out responsibilities that provide some kind of security-related functionality. Finally, we cannot forget that the act of trusting is inextricably linked to that of delegating a certain responsibility, and that when dealing with these concepts the idea of risk always appears: the risk that the expectations generated by the act of delegation will not be fulfilled, or will be fulfilled in a different way. Any system that uses trust to improve or enable its operation is therefore, by its very nature, especially vulnerable if the premises on which it is based are attacked. In this respect we can verify (as we analyze in more detail throughout this document) that the approaches the different disciplines take to the violation of trust systems are extremely varied. Only within the information technologies area has there been an attempt to use approaches from other disciplines to face problems related to TRS security; however, these attempts are incomplete and usually made to fulfill the requirements of specific applications, not with the idea of consolidating a more general base of knowledge reusable in other environments.

With all this in mind, the contributions of this thesis can be summarized as follows. • A complete state-of-the-art analysis of the world of trust and reputation, which allows us to compare the advantages and disadvantages of the different approaches to these concepts in different areas of knowledge. • The definition of a reference architecture for TRSs that covers all the entities and processes involved in this kind of system. • The definition of a reference framework to analyze the security of TRSs. This involves identifying the main security assets of a TRS as well as creating a taxonomy of possible attacks and countermeasures based on those assets. • The proposal of a methodology for the analysis, design, securing and deployment of a TRS in real environments. Additionally, we present the main types of applications that can be obtained from TRSs and the means to maximize their performance in each of them. • The development of a software component that can simulate any kind of TRS based on the previously proposed architecture. This makes it possible to evaluate the performance of a TRS under a given configuration in a controlled environment prior to its deployment in a real environment, and is equally useful for evaluating its resistance to different types of attacks or system malfunctions.

In addition to the contributions made directly to the field of TRSs, we have made original contributions to different areas of knowledge thanks to the application of the analysis, design and security methodologies presented above. • Detection of thermal anomalies in data centers. We successfully implemented a thermal anomaly detection system based on a TRS, comparing the detection performance of Self-Organized Maps (SOM) and Growing Neural Gas (GNG) algorithms. We show how SOM provides better results for Computer Room Air Conditioning anomaly detection, yielding detection rates of 100% in training data with malfunctioning sensors, while GNG yields better detection and isolation rates for workload-induced anomalies, reducing the false positive rate when compared to SOM. • Improving the performance of a harvesting system based on swarm computing and social odometry. Through the implementation of a TRS we improved the coordination capabilities of a distributed network of autonomous robots. The main contribution lies in the analysis and validation of the incremental improvements that can be achieved with the proper use of the information existing in the system that is relevant from the point of view of a TRS, and with the implementation of trust-computation algorithms based on that information. • Improving the security of Wireless Mesh Networks against attacks on the integrity, confidentiality or availability of the data and communications supported by these networks. Thanks to the implementation of a TRS we improved the detection time for these kinds of attacks and limited their potential impact on the system. • Improving the security of Wireless Sensor Networks against advanced attacks, such as insider attacks, unknown attacks, etc. Thanks to the methodologies presented, we implemented countermeasures against such attacks in complex environments. In our experiments we demonstrated that our approach is capable of detecting and confining various attacks that affect the core network protocols, that it offers very fast detection, and that the inclusion of these early-reaction mechanisms significantly increases the effort an attacker has to invest to compromise the network.

Finally, we can conclude that this thesis produces knowledge that is useful and applicable in real environments and that allows us to maximize the performance obtained from the use of TRSs in any field of application. In this way we address the main deficiency currently existing in this field: the lack of a common, aggregated base of knowledge and the absence of a methodology for the development of TRSs that allows us to analyze, design, secure and deploy them in a systematic way, rather than in the artisanal, ad-hoc way it is done today.
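As a concrete illustration of the trust and reputation definitions above, here is a minimal beta-reputation sketch distinguishing subjective trust (one observer's evidence) from collective reputation (pooled evidence). It is a generic textbook mechanism, not the reference architecture proposed in the thesis; all entity names are invented.

```python
# Minimal beta-reputation sketch: trust is per-observer, reputation
# pools everyone's evidence about a target.
from collections import defaultdict

# (observer, target) -> [positive outcomes, negative outcomes]
evidence = defaultdict(lambda: [0, 0])

def record(observer, target, satisfied):
    evidence[(observer, target)][0 if satisfied else 1] += 1

def trust(observer, target):
    """Subjective: expected value of a Beta(p+1, n+1) distribution."""
    p, n = evidence[(observer, target)]
    return (p + 1) / (p + n + 2)

def reputation(target):
    """Collective: pool every observer's evidence about the target."""
    p = sum(v[0] for (o, t), v in evidence.items() if t == target)
    n = sum(v[1] for (o, t), v in evidence.items() if t == target)
    return (p + 1) / (p + n + 2)

record("alice", "carol", True)
record("alice", "carol", True)
record("bob", "carol", False)
print(trust("alice", "carol"), reputation("carol"))  # 0.75 0.6
```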
Abstract:
High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after this initiative was issued, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns, and investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry employs reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more, or are more difficult to use, in embedded systems. Finally, what are the success factors and obstacles to reuse, and are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codings that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach, and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported reusing code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code in embedded systems when the platform changed. Semistructured interviews were then performed to explain why the phenomena observed in the literature review and the questionnaire occur. We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems use heritage/legacy development approaches because their systems have been around for many years, since before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification but that, especially in embedded systems, once the code has to be changed, its reuse yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment; however, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded and nonembedded systems differs today, the two are converging: as heritage embedded systems are phased out, models become more robust and platforms are standardized, reuse in embedded systems will become more like reuse in nonembedded systems.
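As an illustration of the kind of quantitative comparison described above, a chi-square test of association between system type and development approach might look as follows; the counts are invented for the sketch and are not the dissertation's data:

```python
# Hypothetical chi-square test of association between system type
# (embedded vs. nonembedded) and development approach. The observed
# counts below are invented for illustration.
from scipy.stats import chi2_contingency

#                heritage/legacy  model-based
observed = [[34, 11],   # embedded projects
            [12, 28]]   # nonembedded projects

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
# A small p-value would support the finding that embedded projects
# are significantly more likely to use a heritage/legacy approach.
```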
Resumo:
Health and safety policies may be regarded as the cornerstone of positive prevention of occupational accidents and diseases. The Health and Safety at Work etc. Act 1974 makes it a legal duty for employers to prepare and revise a written statement of their general policy with respect to the health and safety at work of employees, as well as the organisation and arrangements for carrying out that policy. Despite their importance and the legal requirement to prepare them, health and safety policies have been found, in a large number of plastics processing companies (particularly small companies), to be poorly prepared and inadequately implemented and monitored. An important cause of these inadequacies is the lack of the health and safety knowledge and expertise necessary to prepare, implement and monitor policies. One possible way of remedying this problem is to investigate the feasibility of using computers to develop expert system programs that simulate the health and safety (HS) experts' task of preparing the policies and assist companies in implementing and monitoring them. Such programs use artificial intelligence (AI) techniques to solve this sort of problem, which is heuristic in nature and requires symbolic reasoning. Expert systems have been used successfully in a variety of fields such as medicine and engineering. An important phase in assessing the feasibility of developing such systems is knowledge engineering, which consists of identifying the knowledge required and then eliciting, structuring and representing it in an appropriate computer programming language.
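A minimal forward-chaining rule engine conveys the heuristic, symbolic-reasoning style on which such an HS policy advisor would rely; the rules, facts and conclusions below are invented examples, not the knowledge base of the system under study:

```python
# Minimal forward-chaining rule engine. Each rule maps a set of
# conditions (facts) to a conclusion; inference repeats until no new
# conclusions can be derived. Rules and facts are invented examples.

rules = [
    ({"uses_solvents"}, "needs_substance_assessment"),
    ({"needs_substance_assessment", "no_named_responsible_person"},
     "policy_inadequate"),
    ({"five_or_more_employees", "no_written_policy"}, "policy_inadequate"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Apply the rules until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"uses_solvents", "no_named_responsible_person"}
print(forward_chain(facts) - facts)
# -> {'needs_substance_assessment', 'policy_inadequate'}
```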
Resumo:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues from the position that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in estimating these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the "classic" metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising the companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while that plan is being produced. If it were the former, then SCE tools would be particularly useful, since very little other data is available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned: current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data-flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
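To make the idea of redefining metric counts for Prolog concrete, the sketch below computes lines of code and a McCabe-like complexity figure from clause and disjunction counts. The counting rules are a plausible simplification for illustration, not the thesis's exact definitions:

```python
# Illustrative size/structure counts for Prolog source. Treats each
# clause as one decision path and each ';' (disjunction) as an extra
# branch, giving a McCabe-like figure; these counting rules are an
# assumption for the sketch.
import re

def prolog_metrics(source: str) -> dict:
    code = re.sub(r"%.*", "", source)          # drop line comments
    loc = sum(1 for line in code.splitlines() if line.strip())
    clauses = len(re.findall(r"\.\s*$", code, flags=re.M))  # clause ends
    branches = code.count(";")                 # Prolog disjunctions
    return {"loc": loc, "clauses": clauses,
            "cyclomatic": clauses + branches}

sample = """
max(X, Y, X) :- X >= Y.
max(_, Y, Y).
abs_val(X, A) :- ( X < 0 -> A is -X ; A is X ).
"""
print(prolog_metrics(sample))
# -> {'loc': 3, 'clauses': 3, 'cyclomatic': 4}
```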
Resumo:
Since the introduction of fiber reinforced polymers (FRP) for the repair and retrofit of concrete structures in the 1980s, considerable research has been devoted to the feasibility of their application and to predictive modeling of their performance. However, the effects of flaws present in the constitutive components, and of the practices in substrate preparation and treatment, have not yet been thoroughly studied. This research aims at investigating the effect of surface preparation and treatment for pre-cured FRP systems and of groove size tolerance for near surface mounted (NSM) FRP systems, and at setting thresholds for guaranteed system performance. This study was conducted as part of the National Cooperative Highway Research Program (NCHRP) Project 10-59B to develop construction specifications and a process control manual for the repair and retrofit of concrete structures using bonded FRP systems. The research included both analytical and experimental components. The experimental program for the pre-cured FRP systems consisted of a total of twenty-four (24) reinforced concrete (RC) T-beams with various surface preparation parameters and surface flaws, including roughness, flatness, voids and cracks (cuts). For the NSM FRP systems, a total of twelve (12) additional RC T-beams were tested with different groove sizes for FRP bars and strips. The analytical program included developing an elaborate nonlinear finite element model using the general-purpose software ANSYS. The bond interface between FRP and concrete was modeled by a series of nonlinear springs. The model was validated against test data from the present study as well as data available in the literature. The model was subsequently used to extend the experimental range of parameters for surface flatness in the pre-cured FRP systems and for the groove size study in the NSM FRP systems. Test results, confirmed by further analyses, indicated that, contrary to the general belief in the industry, the impact of surface roughness on the global performance of pre-cured FRP systems was negligible. The study also verified that threshold limits set for wet lay-up FRP systems can be extended to pre-cured systems. The study showed that larger surface voids and cracks (cuts) can adversely impact both the strength and the ductility of pre-cured FRP systems. On the other hand, the frequency (or spacing) of surface cracks (cuts) may only affect system ductility rather than its strength. Finally, within the range studied, a groove size tolerance of ±1/8 in. does not appear to have an adverse effect on the performance of NSM FRP systems.
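The nonlinear interface springs in such a model are typically driven by a bond-slip law. The sketch below implements a generic bilinear law (linear ascent, then linear softening); the parameter values are invented for illustration and are not the study's calibrated values:

```python
# Generic bilinear bond-slip law of the kind used to drive nonlinear
# FRP-concrete interface springs. All parameter values are invented.

def bond_stress(slip_mm: float, tau_max: float = 6.0,
                s1: float = 0.05, s_ult: float = 0.25) -> float:
    """Bond stress (MPa) vs. slip (mm): linear up to tau_max at s1,
    linear softening to zero at s_ult, zero beyond."""
    s = abs(slip_mm)
    if s <= s1:
        return tau_max * s / s1
    if s <= s_ult:
        return tau_max * (s_ult - s) / (s_ult - s1)
    return 0.0

def spring_force(slip_mm: float, tributary_area_mm2: float = 50.0) -> float:
    """Force (N) carried by one spring over its tributary bond area."""
    return bond_stress(slip_mm) * tributary_area_mm2

for slip in (0.02, 0.05, 0.15, 0.30):
    print(f"slip={slip:.2f} mm -> F={spring_force(slip):6.1f} N")
```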
Resumo:
Purpose. The Internet has provided an unprecedented opportunity for psychotropic medication consumers, a traditionally silenced group in clinical trial research, to have a voice by contributing to the construction of drug knowledge in an immediate, direct manner. Currently, there are no systematic appraisals of the potential of online consumer drug reviews to contribute to drug knowledge. The purpose of this research was to explore the content of drug information on various websites representing themselves as consumer- and expert-constructed and, as a practical consideration, to examine how each source may help and hinder treatment decision-making.
Methodology. A mixed-methods research strategy utilizing a grounded theory approach was used to analyze drug information on 5 exemplar websites (3 consumer- and 2 expert-constructed) for 2 popularly prescribed psychotropic drugs (escitalopram and quetiapine). A stratified simple random sample was used to select 1,080 consumer reviews from the websites (N=7,114) through February 2009. Text was coded using QDA Miner 3.2 software by Provalis Research. A combination of frequency tables, descriptive excerpts from the text, and chi-square tests for association was used throughout the analyses.
Findings. The effects most frequently mentioned by consumers taking either drug were related to psychological/behavioral symptoms and sleep. Consumers reported many of the same effects found on expert health sites, but provided more descriptive language and situational examples. Experts' labeling of certain effects as less serious was not congruent with the sometimes tremendous burden described by consumers. Consumers mentioned more than double the themes mentioned in expert text, and demonstrated a diversity and range of discourses around those themes.
Conclusions. Each source's account of drug effects was complete relative to the information provided by the other, but each also offered distinct advantages. Expert health sites provided concise summaries of medications' effects, while consumer reviews had the added advantage of concrete descriptions and greater context. In short, consumer reviews better prepared potential consumers for what it's like to take psychotropic drugs. Both sources of information can benefit clinicians and consumers in making informed treatment-related decisions. Social work practitioners are encouraged to thoughtfully utilize online consumer drug reviews as a legitimate additional source for assisting clients in learning about treatment options.
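The stratified simple random sampling step can be sketched as follows; the grouping by consumer website and drug follows the description above, but the column names, stratum sizes and per-stratum draw (180 reviews from each of six site-by-drug strata, giving 1,080 in total) are illustrative assumptions:

```python
# Sketch of stratified simple random sampling of consumer reviews.
# The DataFrame layout and stratum sizes are invented; only the
# totals (N=7,114 reviews, 1,080 sampled) follow the abstract.
import pandas as pd

reviews = pd.DataFrame({
    "site": ["A"] * 3000 + ["B"] * 2500 + ["C"] * 1614,
    "drug": ["escitalopram", "quetiapine"] * 3557,
    "text": ["..."] * 7114,
})

# Draw 180 reviews from each site-by-drug stratum: 6 x 180 = 1,080.
sample = reviews.groupby(["site", "drug"]).sample(n=180, random_state=1)
print(sample.groupby(["site", "drug"]).size())
```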