937 results for INFORMATION AND COMPUTING SCIENCES


Relevance: 100.00%

Publisher:

Abstract:

Data-intensive Grid applications require huge data transfers between grid computing nodes. These computing nodes, where computing jobs are executed, are usually geographically separated. A grid network that employs optical wavelength division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any one of several computing nodes that possess the necessary resources. To reflect reality in job scheduling, the allocation of network resources for data transfer should be taken into consideration. However, few scheduling methods consider communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem, considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in a Lambda Grid network. An adaptive routing algorithm is proposed and implemented to accomplish the communication tasks for every job submitted to the network. Four heuristics (FIFO, ESTF, LJF, RS) are implemented for job scheduling of the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
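As a rough illustration of how such ordering heuristics differ, the sketch below greedily admits jobs onto a single shared lightpath under two of the named orderings, FIFO (by arrival time) and LJF (largest job first). The job fields and the single-resource setting are simplifying assumptions of mine, not the paper's simulator.

```python
# Toy greedy scheduler: a job is accepted if its transfer can finish
# on the shared lightpath before its deadline.

def schedule(jobs, order_key):
    t = 0
    accepted = 0
    for arrival, duration, deadline in sorted(jobs, key=order_key):
        start = max(t, arrival)
        if start + duration <= deadline:
            t = start + duration
            accepted += 1
    return accepted

# (arrival, duration, deadline) -- invented workload
jobs = [(0, 4, 10), (1, 2, 4), (2, 5, 20), (3, 1, 5)]
fifo = schedule(jobs, order_key=lambda j: j[0])    # FIFO: by arrival
ljf = schedule(jobs, order_key=lambda j: -j[1])    # LJF: largest first
```

On this toy workload FIFO admits two jobs while LJF admits only one, which is exactly why the choice of ordering heuristic matters for the job-maximization objective.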


The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators.
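For contrast with the probability-free approach proposed here, the conventional route to mutual information, estimating probabilities by binning two time series, can be sketched as follows; the bin count and test signals are arbitrary choices of mine.

```python
import numpy as np

# Plug-in mutual information (in bits) from a joint histogram:
# I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x)p(y)) )
def mutual_information(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # skip empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
noise = rng.normal(size=5000)
mi_coupled = mutual_information(x, x + 0.1 * noise)  # strongly dependent
mi_indep = mutual_information(x, noise)              # independent
```

The strongly coupled pair yields a large estimate while the independent pair yields a value near zero; the residual positive value for independent data is exactly the kind of finite-sample, binning-dependent artifact that motivates probability-free bounds on the MIR.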


The dissertation entitled "Tuning of magnetic exchange interactions between organic radicals through bond and space" comprises eight chapters. The initial part of chapter 1 gives an overview of organic radicals and their applications, and the latter part describes the motivation and objectives of the thesis. As EPR spectroscopy is an essential tool for studying organic radicals, the basic principles of EPR spectroscopy are discussed in chapter 2. Antiferromagnetically coupled species can be considered a source of interacting bosons. Consequently, such biradicals can serve as molecular models of a gas of magnetic excitations, which can be used for quantum computing or quantum information processing. Notably, the initially small triplet-state population in weakly AF-coupled biradicals can be switched to a larger one in an applied magnetic field. Such biradical systems are promising molecular models for studying the phenomenon of magnetic field-induced Bose-Einstein condensation in the solid state. To observe such phenomena it is very important to control the intra- as well as inter-molecular magnetic exchange interactions. Chapters 3 to 5 deal with the tuning of intra- and inter-molecular exchange interactions using different approaches. These include changing the length of the π-spacer, introducing functional groups, forming metal complexes with diamagnetic metal ions, and varying the radical moieties. During this study I came across two very interesting molecules, 2,7-TMPNO and BPNO, which exist in a semi-quinoid form and exhibit characteristics of both the biradical and the quinoid form simultaneously. 2,7-TMPNO possesses a singlet-triplet energy gap of ΔEST = –1185 K, which makes it nearly unrealistic to observe magnetic field-induced spin switching. We therefore studied the spin switching of this molecule by photo-excitation, as discussed in chapter 6.
The structural similarity of BPNO with Tschitschibabin's HC allowed us to investigate the discrepancies related to the ground state of Tschitschibabin's hydrocarbon (discussed in chapter 7). Finally, in chapter 8 the synthesis and characterization of a neutral paramagnetic HBC derivative (HBCNO) is discussed. The magneto liquid crystalline properties of HBCNO were studied by DSC and EPR spectroscopy.
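To see why a gap of the quoted size rules out field-induced switching, a back-of-envelope comparison (mine, not the thesis's) of the Zeeman energy of a strong laboratory magnet against ΔEST = –1185 K can be sketched:

```python
# Compare the Zeeman splitting of even a 20 T magnet with the
# singlet-triplet gap of 1185 K quoted for 2,7-TMPNO.

MU_B = 9.274e-24   # Bohr magneton, J/T
K_B = 1.381e-23    # Boltzmann constant, J/K
g = 2.0            # free-electron g-factor (assumption)

def zeeman_kelvin(field_tesla):
    """Zeeman energy g*mu_B*B expressed in kelvin."""
    return g * MU_B * field_tesla / K_B

gap = 1185.0                 # |Delta E_ST| in K
shift = zeeman_kelvin(20.0)  # roughly 27 K for a 20 T field
ratio = shift / gap          # a few percent of the gap
```

Even a 20 T field shifts the triplet sublevels by only tens of kelvin, a few percent of the 1185 K gap, which is consistent with the thesis's choice of photo-excitation instead of magnetic fields for spin switching in this molecule.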


Background Access to health care can be described along four dimensions: geographic accessibility, availability, financial accessibility and acceptability. Geographic accessibility measures how physically accessible resources are for the population, while availability reflects what resources are available and in what amount. Combining these two types of measure into a single index provides a measure of geographic (or spatial) coverage, which is an important measure for assessing the degree of accessibility of a health care network. Results This paper describes the latest version of AccessMod, an extension to the Geographical Information System ArcView 3.x, and provides an example of application of this tool. AccessMod 3 allows one to compute geographic coverage of health care using terrain information and population distribution. Four major types of analysis are available in AccessMod: (1) modeling the coverage of catchment areas linked to an existing health facility network based on travel time, to provide a measure of physical accessibility to health care; (2) modeling geographic coverage according to the availability of services; (3) projecting the coverage of a scaling-up of an existing network; (4) providing information for cost-effectiveness analysis when little information about the existing network is available. In addition to integrating travelling time, population distribution and the population coverage capacity specific to each health facility in the network, AccessMod can incorporate the influence of landscape components (e.g. topography, river and road networks, vegetation) that impact travelling time to and from facilities. Topographical constraints can be taken into account through an anisotropic analysis that considers the direction of movement. We provide an example of the application of AccessMod in the southern part of Malawi that shows the influences of the landscape constraints and of the modes of transportation on geographic coverage.
Conclusion By incorporating the demand (population) and the supply (capacities of health care centers), AccessMod provides a unifying tool to efficiently assess the geographic coverage of a network of health care facilities. This tool should be of particular interest to developing countries that have relatively good geographic information on population distribution, terrain, and health facility locations.
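The travel-time modelling behind such coverage analyses can be sketched as a least-cost search over a raster of travel speeds. This is a minimal isotropic sketch of the general idea, not AccessMod's implementation; AccessMod additionally applies anisotropic, slope-aware corrections that depend on the direction of movement.

```python
import heapq

# Dijkstra over a speed raster (km/h): travel time from a facility
# cell to every other cell, crossing each cell edge at the mean
# speed of the two cells.
def travel_time_map(speed, facility, cell_km=1.0):
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    time = [[INF] * cols for _ in range(rows)]
    time[facility[0]][facility[1]] = 0.0
    heap = [(0.0, facility)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > time[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cell_km / ((speed[r][c] + speed[nr][nc]) / 2)
                if t + step < time[nr][nc]:
                    time[nr][nc] = t + step
                    heapq.heappush(heap, (time[nr][nc], (nr, nc)))
    return time

speed = [[60, 60, 5],   # a road corridor (60 km/h) beside
         [60, 60, 5],   # slow terrain cells (5 km/h)
         [60, 60, 5]]
t = travel_time_map(speed, facility=(0, 0))
```

Thresholding such a time map (e.g. cells reachable within one hour) yields a catchment area, and summing the population inside it against the facility's coverage capacity gives the geographic coverage measure the paper describes.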


BACKGROUND: Many users search the Internet for answers to health questions. Complementary and alternative medicine (CAM) is a particularly common search topic. Because many CAM therapies do not require a clinician's prescription, false or misleading CAM information may be more dangerous than information about traditional therapies. Many quality criteria have been suggested to filter out potentially harmful online health information. However, assessing the accuracy of CAM information is uniquely challenging since CAM is generally not supported by conventional literature. OBJECTIVE: The purpose of this study is to determine whether domain-independent technical quality criteria can identify potentially harmful online CAM content. METHODS: We analyzed 150 Web sites retrieved from a search for the three most popular herbs: ginseng, ginkgo and St. John's wort and their purported uses on the ten most commonly used search engines. The presence of technical quality criteria as well as potentially harmful statements (commissions) and vital information that should have been mentioned (omissions) was recorded. RESULTS: Thirty-eight sites (25%) contained statements that could lead to direct physical harm if acted upon. One hundred forty five sites (97%) had omitted information. We found no relationship between technical quality criteria and potentially harmful information. CONCLUSIONS: Current technical quality criteria do not identify potentially harmful CAM information online. Consumers should be warned to use other means of validation or to trust only known sites. Quality criteria that consider the uniqueness of CAM must be developed and validated.


Currently more than half of Electronic Health Record (EHR) projects fail. Most of these failures are not due to flawed technology, but rather to the lack of systematic consideration of human issues. Among the barriers to EHR adoption, function mismatching among users, activities, and systems is a major area that has not been systematically addressed from a human-centered perspective. A theoretical framework called the Functional Framework was developed for identifying and reducing functional discrepancies among users, activities, and systems. The Functional Framework is composed of three models – the User Model, the Designer Model, and the Activity Model. The User Model was developed by conducting a survey (N = 32) that identified the functions needed and desired from the user's perspective. The Designer Model was developed by conducting a systematic review of an Electronic Dental Record (EDR) and its functions. The Activity Model was developed using an ethnographic method called shadowing, in which EDR users (5 dentists, 5 dental assistants, 5 administrative personnel) were followed quietly and observed in their activities. These three models were combined to form a unified model. From the unified model, the work domain ontology was developed by asking users to rate the functions (190 in total) in the unified model along the dimensions of frequency and criticality in a survey. The functional discrepancies, as indicated by the regions of the Venn diagrams formed by the three models, were consistent with the survey results, especially with user satisfaction. The survey for the Functional Framework indicated the preference of one system over the other (R = 0.895). The results of this project show that the Functional Framework provides a systematic method for identifying, evaluating, and reducing functional discrepancies among users, systems, and activities. Limitations and the generalizability of the Functional Framework are discussed.
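The Venn-region logic of the three models can be illustrated with plain set operations: each region of the three-set diagram names a specific kind of functional discrepancy. The function names below are invented for illustration and are not from the study.

```python
# Three sets of functions, one per model of the Functional Framework.
user = {"charting", "e-prescribing", "scheduling", "billing"}      # User Model
designer = {"charting", "scheduling", "billing", "audit-log"}      # Designer Model
activity = {"charting", "scheduling", "paper-notes"}               # Activity Model

well_matched = user & designer & activity        # wanted, built, and used
unmet_needs = user - designer                    # wanted but not implemented
unused_features = designer - user - activity     # built but neither wanted nor used
workarounds = activity - designer                # done outside the system
```

Each non-central region is a candidate for redesign: unmet needs and workarounds point to missing functions, while unused features point to wasted implementation effort.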


OBJECTIVES: To determine the prevalence of false or misleading statements in messages posted by internet cancer support groups and whether these statements were identified as false or misleading and corrected by other participants in subsequent postings. DESIGN: Analysis of content of postings. SETTING: Internet cancer support group Breast Cancer Mailing List. MAIN OUTCOME MEASURES: Number of false or misleading statements posted from 1 January to 23 April 2005 and whether these were identified and corrected by participants in subsequent postings. RESULTS: 10 of 4600 postings (0.22%) were found to be false or misleading. Of these, seven were identified as false or misleading by other participants and corrected within an average of four hours and 33 minutes (maximum, nine hours and nine minutes). CONCLUSIONS: Most posted information on breast cancer was accurate. Most false or misleading statements were rapidly corrected by participants in subsequent postings.


People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task on relational data. Based on the model and taxonomy, I have also developed interface prototypes for search tasks on relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and performed one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting search performance on relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors in determining search efficiency and effectiveness.
In particular, the more external representations are used, the better the search task performance; the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.


Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have been intrigued for a long time by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller.
Indeed, without the possibility to make repeated offers, it is too risky for the buyer to offer prices that allow for trade of high quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, matching the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to crucially depend on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
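The Core mentioned in Chapter 3 can be illustrated for a simple characteristic function; partition functions generalize this setting by letting a coalition's worth depend on the whole coalition structure, so as to capture externalities. The three-player numbers below are mine, chosen only to make the check concrete.

```python
from itertools import combinations

# Worth of each coalition (a characteristic function, no externalities).
v = {frozenset(): 0,
     frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
     frozenset({1, 2, 3}): 9}

def in_core(payoff):
    """A split of the grand coalition's worth is in the Core if no
    sub-coalition can secure more for itself by deviating."""
    if abs(sum(payoff.values()) - v[frozenset(payoff)]) > 1e-9:
        return False  # must distribute exactly the grand-coalition worth
    players = list(payoff)
    for size in range(1, len(players)):
        for coal in combinations(players, size):
            if sum(payoff[i] for i in coal) < v[frozenset(coal)] - 1e-9:
                return False  # this coalition would rather go it alone
    return True

equal_split = {1: 3.0, 2: 3.0, 3: 3.0}   # every pair gets 6 >= 4: stable
skewed = {1: 7.0, 2: 1.0, 3: 1.0}        # pair {2,3} gets 2 < 4: blocked
```

The equal split is in the Core while the skewed split is blocked by coalition {2, 3}, which mirrors the dissertation's point that efficiency of non-cooperative negotiation is tied to whether such blocking (free-riding) incentives are absent.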


Abstract As librarians of the Social & Preventive Medicine Library in Bern, we help researchers perform systematic literature searches and teach students to use medical databases. We developed our skills mainly “on the job”, and we wondered how other health librarians in Europe were trained to become experts in searching. We had a great opportunity to “job shadow” specialists in this area of library service during a 5-day-internship at the Royal Free Hospital Medical Library in London, Great Britain.


Cancer is the second leading cause of death in the United States. With the advent of new technologies, changes in health care delivery, and the multiplicity of provider types that patients must see, cancer care management has become increasingly complex. The availability of cancer health information has been shown to help cancer patients cope with the management and effects of their cancers. As a result, more cancer patients are using the internet to find resources that can aid in decision-making and recovery. The Health Information National Trends Survey (HINTS) is a nationally representative survey designed to collect information about the experiences of cancer and non-cancer adults with health information sources. The HINTS survey focused on both conventional sources as well as newer technologies, particularly the internet. This study is a descriptive analysis of the HINTS 2003 and HINTS 2005 survey data. The purpose of the research is to explore the general trends in health information seeking and use by US adults, and especially by cancer patients. From 2003 to 2005, internet use for various health-related activities appears to have increased among adults with and without cancer. Differences were found between the groups in the general trust in information media, particularly the internet. Non-cancer respondents tended to have greater trust in information media than cancer respondents. The latter portion of this work examined characteristics of HINTS respondents that were thought to be relevant to how much trust individuals placed in the internet as a source of health information. Trust in health information from the internet was significantly greater among younger adults, higher-earning households, internet users, online seekers of health or cancer information, and those who found online cancer information useful.


Problem: Medical and veterinary students memorize facts but then have difficulty applying those facts in clinical problem solving. Cognitive engineering research suggests that the inability of medical and veterinary students to infer concepts from facts may be due in part to specific features of how information is represented and organized in educational materials. First, physical separation of pieces of information may increase the cognitive load on the student. Second, information that is necessary but not explicitly stated may also contribute to the student's cognitive load. Finally, the types of representations – textual or graphical – may also support or hinder the student's learning process. This may explain why students have difficulty applying biomedical facts in clinical problem solving. Purpose: To test the hypothesis that three specific aspects of expository text – the spatial distance between the facts needed to infer a rule, the explicitness of information, and the format of representation – affected the ability of students to solve clinical problems. Setting: The study was conducted in the parasitology laboratory of a college of veterinary medicine in Texas. Sample: The study subjects were a convenience sample consisting of 132 second-year veterinary students who matriculated in 2007. The age of this class upon admission ranged from 20-52, and the gender makeup of this class consisted of approximately 75% females and 25% males. Results: No statistically significant difference in student ability to solve clinical problems was found when relevant facts were placed in proximity, nor when an explicit rule was stated. Further, no statistically significant difference in student ability to solve clinical problems was found when students were given different representations of material, including tables and concept maps.
Findings: The findings from this study indicate that the three properties investigated – proximity, explicitness, and representation – had no statistically significant effect on student learning as it relates to clinical problem-solving ability. However, ad hoc observations as well as findings from other researchers suggest that the subjects were probably using rote learning techniques such as memorization, and therefore were not attempting to infer relationships from the factual material in the interventions, unless they were specifically prompted to look for patterns. A serendipitous finding unrelated to the study hypothesis was that those subjects who correctly answered questions regarding functional (non-morphologic) properties, such as mode of transmission and intermediate host, at the family taxonomic level were significantly more likely to correctly answer clinical case scenarios than were subjects who did not correctly answer questions regarding functional properties. These findings suggest a strong relationship (p < .001) between well-organized knowledge of taxonomic functional properties and clinical problem solving ability. Recommendations: Further study should be undertaken investigating the relationship between knowledge of functional taxonomic properties and clinical problem solving ability. In addition, the effect of prompting students to look for patterns in instructional material, followed by the effect of factors that affect cognitive load such as proximity, explicitness, and representation, should be explored.


The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which can be indicated by the cross-level interaction effects. However, the statistical hypothesis test for the effect of a cross-level interaction in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes have received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparisons and their corresponding confidence intervals in HLGM analyses, because appropriate standard functions to estimate effect sizes associated with the model-implied difference between group trajectories in HLGM are lacking, as are computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes.
We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and du and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained with the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analysis provide additional and meaningful information for assessing group effects on individual trajectories. In addition, we compared the three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect size estimates relative to the population parameter. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when the assumptions are not met.
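A simplified version of the proposed computation can be sketched as a standardized difference between two groups' model-implied trajectory values at a fixed time point, with a basic percentile bootstrap standing in for the noncentral-t and bias-corrected-and-accelerated intervals the project recommends. The data and the pooled-SD formula here are illustrative assumptions, not the dissertation's exact functions.

```python
import numpy as np

def traj_effect_size(a, b):
    """Cohen-style d: mean difference over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def bootstrap_ci(a, b, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for d (resampling each group)."""
    rng = np.random.default_rng(seed)
    ds = [traj_effect_size(rng.choice(a, len(a)), rng.choice(b, len(b)))
          for _ in range(n_boot)]
    return np.percentile(ds, [2.5, 97.5])

rng = np.random.default_rng(1)
treat = rng.normal(1.0, 1.0, 80)  # model-implied values at time t, group 1
ctrl = rng.normal(0.5, 1.0, 80)   # model-implied values at time t, group 2
d = traj_effect_size(treat, ctrl)
lo, hi = bootstrap_ci(treat, ctrl)
```

Repeating this at several time points is what reveals the pattern the abstract describes: a nonsignificant interaction test can coexist with a large d at particular times along the trajectories.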


Ubiquitous computing software needs to be autonomous so that essential decisions, such as how to configure its particular execution, are self-determined. Moreover, data mining serves an important role for ubiquitous computing by providing intelligence to several types of ubiquitous computing applications. Thus, automating ubiquitous data mining is also crucial. We focus on the problem of automatically configuring the execution of a ubiquitous data mining algorithm. In our solution, we generate configuration decisions in a resource-aware and context-aware manner, since the algorithm executes in an environment in which the context often changes and computing resources are often severely limited. We propose to analyze the execution behavior of the data mining algorithm by mining its past executions. By doing so, we discover the effects of resource and context states, as well as parameter settings, on data mining quality. We argue that a classification model is appropriate for predicting the behavior of an algorithm's execution, and we concentrate on the decision tree classifier. We also define a taxonomy of data mining quality so that, for each behavior model that classifies by a different abstraction of quality, the tradeoff between prediction accuracy and classification specificity is scored for model selection. Behavior model constituents and class label transformations are formally defined, and experimental validation of the proposed approach is performed.
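The behavior-mining idea can be sketched on toy data: past executions, each described by a resource state, a context state, and a parameter setting, are labeled with the quality they produced, and a classifier learned from them predicts the quality of a new configuration. A one-feature decision stump stands in below for the decision tree classifier; the features and data are invented.

```python
# Past executions: (free_memory_mb, battery_pct, sample_rate) -> quality label.
history = [
    ((512, 80, 100), "high"), ((480, 75, 100), "high"),
    ((128, 20, 100), "low"),  ((96, 15, 50), "low"),
    ((500, 90, 50), "high"),  ((110, 30, 50), "low"),
]

def best_stump(data):
    """Pick the (feature, threshold, label-above) split with fewest errors."""
    best = None
    for f in range(len(data[0][0])):
        for thresh in sorted({x[f] for x, _ in data}):
            for above in ("high", "low"):
                below = "low" if above == "high" else "high"
                errs = sum((above if x[f] >= thresh else below) != y
                           for x, y in data)
                if best is None or errs < best[0]:
                    best = (errs, f, thresh, above)
    return best[1:]

f, thresh, above = best_stump(history)

def predict(x):
    """Predict execution quality for a new resource/context/parameter state."""
    below = "low" if above == "high" else "high"
    return above if x[f] >= thresh else below
```

On this toy history the stump learns that free memory is the decisive state variable; a configurator could then, for example, lower the sample-rate parameter whenever the predicted quality for the current state is "low".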


Currently, student dropout rates are a matter of concern among universities. Many research studies aimed at discovering the causes have been carried out. However, few solutions that could serve all students and their related problems have been proposed so far. One such problem is caused by the lack of the "knowledge chain educational links" that occurs when students move on to higher studies without mastering their basic studies. Most regulated studies imparted at universities are designed so that some basic subjects serve as support for other, more complicated subjects, thus forming a complicated knowledge network. When a link in this chain fails, student frustration occurs, as it prevents the student from fully understanding the following educational links. In this proposal we try to mitigate these setbacks, which for the most part extend the student's frustration beyond the college stay. On one hand, we examine the student's learning process, which we divide into a series of phases that amount to what we call the "learning lifecycle." We also analyze at what stage action by the stakeholders involved in this scenario, teachers and students, is most important. On the other hand, we consider that Information and Communication Technologies (ICT), such as Cloud Computing, can help develop new ways of teaching in higher education while easing and facilitating the student's learning process. However, methods and processes need to be defined to direct the use of such technologies, in teaching in general and within higher education in particular, in order to achieve optimum results. Our methodology integrates ICT, as another actor, into the "learning lifecycle." We stimulate students to stop being merely spectators of their own education, and encourage them to take an active part in their training process.
To do this, we offer a set of useful tools to determine not only the causes of academic failure (for self-assessment) but also remedies for these failures (corrective actions); once the causes are discovered, it is easier to determine solutions. We believe this study will be useful for both students and teachers. Students learn from their own experience and improve their learning process, while obtaining all of the "knowledge chain educational links" required in their studies. We stand by the motto "Studying to learn instead of studying to pass". Teachers will also benefit by detecting where and how to strengthen their teaching proposals. All of this will also result in decreasing dropout rates.