969 results for dynamic adverse selection
Abstract:
The concepts of fractional calculus (FC) are applied in almost all areas of science and engineering, and its ability to yield superior modeling and control of many dynamical systems is well recognized. In this article, we introduce the fundamental aspects of applying FC to the control of dynamic systems.
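The abstract stays at the conceptual level; as a minimal illustration of the operator underlying FC-based control, the sketch below implements the Grünwald-Letnikov approximation of a fractional derivative (the function name and test case are ours, not from the article):

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative.

    f_vals: samples of f on a uniform grid with spacing h.
    Returns the approximate derivative at every grid point, using all
    samples up to that point (no short-memory truncation).
    """
    n = len(f_vals)
    # weights w_k = (-1)^k * binom(alpha, k), via the standard recursion
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.empty(n)
    for i in range(n):
        d[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h**alpha
    return d

# check against the known result D^{1/2} t = 2*sqrt(t/pi)
t = np.linspace(0.0, 1.0, 201)
approx = gl_fractional_derivative(t, alpha=0.5, h=t[1] - t[0])
print(approx[-1], 2.0 * np.sqrt(t[-1] / np.pi))
```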
Abstract:
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted by the maximum likelihood methodology to the data alone; the external variables are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion. © 2014 Springer-Verlag Berlin Heidelberg.
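A crude stand-in for the idea (not the paper's integrated joint likelihood criterion): fit each mixture by maximum likelihood on the data alone, then score each number of clusters by combining goodness of fit with the agreement between the model-based partition and the external variable. The BIC-plus-mutual-information score below is our assumption:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import mutual_info_score

def select_n_clusters(X, external, k_range=range(1, 9), seed=0):
    """Choose the number of clusters by balancing fit to the data
    (negative BIC) against agreement between the model-based partition
    and an external categorical variable (mutual information).
    """
    best_k, best_score, best_model = None, -np.inf, None
    for k in k_range:
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(X)
        labels = gmm.predict(X)
        score = -gmm.bic(X) + len(X) * mutual_info_score(external, labels)
        if score > best_score:
            best_k, best_score, best_model = k, score, gmm
    return best_k, best_model
```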
Abstract:
ABSTRACT OBJECTIVE To describe different approaches to promote adverse drug reaction reporting among health care professionals, determining their cost-effectiveness. METHODS We analyzed and compared several approaches taken by the Northern Pharmacovigilance Centre (Portugal) to promote adverse drug reaction reporting. Approaches were compared regarding the number and relevance of the adverse drug reaction reports obtained and the costs involved. The cost per report was estimated by adding the initial and running costs of each intervention and dividing by the number of reports it yielded. RESULTS All the approaches seem to have increased the number of adverse drug reaction reports. The biggest increase was obtained with protocols (321 reports, costing 1.96 € each), followed by the first educational approach (265 reports, 20.31 €/report) and by the hyperlink approach (136 reports, 15.59 €/report). Regarding severe adverse drug reactions, protocols were the most efficient approach, costing 2.29 €/report, followed by hyperlinks (30.28 €/report, with no running costs). Concerning unexpected adverse drug reactions, the best result was obtained with protocols (5.12 €/report), followed by the first educational approach (38.79 €/report). CONCLUSIONS We recommend implementing protocols in other pharmacovigilance centers. They seem to be the most efficient intervention, allowing adverse drug reaction reports to be received at lower cost. The increase applied not only to the total number of reports, but also to the severity, unexpectedness, and high degree of causality of the reported adverse drug reactions. Still, hyperlinks have the advantage of involving no running costs, showing the second best performance in cost per report.
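The cost-effectiveness measure described in METHODS is a simple ratio; a one-line sketch follows (the cost split in the example is hypothetical, chosen only so that it reproduces the reported ≈1.96 €/report for protocols):

```python
def cost_per_report(initial_cost, running_cost, n_reports):
    """Cost-effectiveness as defined in METHODS:
    (initial costs + running costs) / number of reports obtained."""
    return (initial_cost + running_cost) / n_reports

# hypothetical split reproducing the reported ~1.96 EUR/report for protocols
print(cost_per_report(initial_cost=500.0, running_cost=130.0, n_reports=321))
```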
Abstract:
ABSTRACT OBJECTIVE To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS Data envelopment analysis, a linear programming technique, creates a best practice frontier by comparing observed production given the amount of resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables belonging to more than one dimension (in the model, medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model, financing budget) to analyze frontier shift in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC), 2010-2013. RESULTS The mean scores for health care, teaching and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best performance year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits; 34.0% in admissions; 12.0% in undergraduate students; 13.0% in multi-professional residents; 48.0% in graduate students; 7.0% in research projects; besides a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care and no variation in research. CONCLUSIONS The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best practice frontier.
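The network and dynamic extensions described in METHODS add link and carry-over variables; the sketch below implements only the basic building block, an output-oriented, variable-returns-to-scale DEA model solved as a linear program (variable names are ours):

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_output(X, Y, o):
    """Output-oriented, variable-returns-to-scale DEA score for unit o.

    X: (n_units, n_inputs) observed inputs, Y: (n_units, n_outputs).
    Returns eta >= 1: the factor by which unit o's outputs could be
    scaled while staying inside the best-practice frontier
    (efficiency = 1/eta).
    """
    n, m = X.shape
    s = Y.shape[1]
    # decision vector z = [eta, lambda_1..lambda_n]; maximize eta
    c = np.r_[-1.0, np.zeros(n)]
    # inputs: sum_j lambda_j x_j <= x_o
    A_in = np.hstack([np.zeros((m, 1)), X.T])
    # outputs: eta * y_o - sum_j lambda_j y_j <= 0
    A_out = np.hstack([Y[o][:, None], -Y.T])
    A_ub, b_ub = np.vstack([A_in, A_out]), np.r_[X[o], np.zeros(s)]
    A_eq = np.r_[0.0, np.ones(n)][None, :]  # convexity (VRS): sum lambda = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return -res.fun
```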
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality and need to address it in order to be effective. Examples of this type of data include the bag-of-words representation in text classification and gene expression data for tumor detection/classification. Among the many features characterizing the instances, a large number may be irrelevant (or even detrimental) to the learning task. There is thus a clear need for adequate techniques for feature representation, reduction, and selection, both to improve classification accuracy and to reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
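A minimal sketch of the combined pipeline, using generic stand-ins (equal-frequency binning and a variance-based relevance score) rather than the specific discretization and selection criteria proposed in the paper:

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

def discretize_and_select(X, n_bins=5, top_k=100):
    """Unsupervised pipeline: equal-frequency discretization of each
    feature, then ranking of features by the dispersion (variance) of
    their discrete codes; the top_k most dispersed features are kept.
    """
    disc = KBinsDiscretizer(n_bins=n_bins, encode="ordinal",
                            strategy="quantile")
    Xd = disc.fit_transform(X)
    relevance = Xd.var(axis=0)            # spread of the discrete codes
    keep = np.argsort(relevance)[::-1][:top_k]
    return Xd[:, keep], keep
```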
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and can act as pre-processors that let computationally intensive methods focus on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, which achieve lower generalization error than state-of-the-art techniques while being dramatically simpler and faster.
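A sketch of a low-complexity relevance-redundancy filter in the same spirit (the greedy criterion below is a generic stand-in, not the paper's method):

```python
import numpy as np

def rr_rank(X, relevance, n_select):
    """Greedy relevance-redundancy filter: start from the most relevant
    feature, then repeatedly add the feature maximizing relevance minus
    mean |correlation| with the features selected so far. Cost is
    O(n_select * n_samples * n_features); no full correlation matrix.
    """
    n, d = X.shape
    Xs = (X - X.mean(0)) / (X.std(0) + 1e-12)   # standardize once
    relevance = np.asarray(relevance, dtype=float)
    selected = [int(np.argmax(relevance))]
    redundancy = np.zeros(d)
    for _ in range(n_select - 1):
        # |correlation| of every feature with the last selected one,
        # folded into a running mean over all selected features
        r = np.abs(Xs.T @ Xs[:, selected[-1]]) / n
        redundancy += (r - redundancy) / len(selected)
        score = relevance - redundancy
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    return selected
```

The relevance vector can be supplied unsupervised (e.g., feature variance) or supervised (e.g., absolute correlation of each feature with the labels), matching the claimed applicability across learning settings.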
Abstract:
Final Master's project submitted for the degree of Master in Electronics and Telecommunications Engineering
Abstract:
15th IEEE International Conference on Electronics, Circuits and Systems, Malta
Abstract:
Final Master's project submitted for the degree of Master in Maintenance Engineering
Abstract:
This work deals with the numerical simulation of the air stripping process used in the pre-treatment of groundwater for human consumption. The steady-state model has an exponential solution that is used, together with the Tau method, to obtain a spectral approximation of the solution of the system of partial differential equations associated with the transient model.
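The abstract does not give the equations; as a minimal sketch, assuming a first-order stripping model with water velocity u and overall transfer coefficient K (our notation, not the paper's), the kind of exponential steady-state solution referred to arises as follows:

```latex
\[
\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial z} = -K\,C ,
\qquad
\text{steady state: } u\,\frac{dC}{dz} = -K\,C
\;\Longrightarrow\;
C(z) = C(0)\,e^{-Kz/u} .
\]
```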
Abstract:
All everyday activities take place in space, and it is upon space that all information and knowledge revolve; these are the key elements in the organisation of territories. Their creation, use and distribution should therefore occur in a balanced way throughout the whole territory, in order to allow all individuals to participate in an egalitarian society in which the flow of knowledge can take precedence over the flow of interests. The information society depends, to a large extent, on the technological capacity to disseminate information, and consequently knowledge, throughout the territory, thereby creating conditions for a more balanced development from both the social and the economic points of view and avoiding the existence of info-exclusion territories.

The Internet should therefore be considered more than a mere technology, given that its importance goes well beyond the frontiers of culture and society. It is already a part of daily life and of the new forms of thinking and transmitting information, making it a basic necessity, essential for full socio-economic development. Its role as a platform for the creation and distribution of content is regarded as an indispensable element for education in today’s society, since it makes information a much more easily acquired benefit. “…in the same way that the new technologies of generation and distribution of energy allowed factories and large companies to establish themselves as the organisational bases of industrial society, so the internet today constitutes the technological base of the organisational form that characterises the Information Era: the network” (CASTELLS, 2004:15).

The changes taking place today in regional and urban structures are increasingly evident, due to a combination of factors such as faster means of transport, more efficient telecommunications, and other cheaper and more advanced information and knowledge technologies. Although their impact on society is obvious, society itself also has a strong influence on the evolution of these technologies. And while physical distance has lost much of its power to explain particular phenomena of the economy and of society, other aspects such as telecommunications, new forms of mobility, networks of innovation, the internet, cyberspace, etc., have become more important and are now the subject of study and profound analysis. Geographical information science allows these problems to be analysed much more rigorously, integrating in a much more balanced way the concepts of place, space and time. Among the traditional disciplines that have already found their place in this process of research and analysis, special attention can be given to a geography of new spaces which, while being neither a geography of ‘innovation’, nor of the ‘Internet’, nor even a ‘virtual’ one, can be defined as a geography of the ‘Information Society’, encompassing not only technological aspects but also a socio-economic approach.

According to the latest European statistical data, Portugal shows a deficit in the dissemination of information and knowledge relative to its European partners. Some of the causes are very well identified - low levels of schooling, weak investment in innovation and R&D (in both the private and the public sector) - but others seem to be hidden behind socio-economic and technological factors.
Portugal thus emerged naturally as the case study, in a difficult quest to identify the major causes of these territorial asymmetries. The substantial amount of data needed for this work was very difficult to obtain, and for the islands of Madeira and the Azores it was insufficient, so only Continental Portugal was considered. In an effort to understand the various aspects of the Geography of the Information Society, and bearing in mind the increasingly generalised use of information technologies together with the range of technologies available for the dissemination of information, it is important to: (i) reflect on the geography of the new socio-technological spaces; (ii) evaluate the potential for the dissemination of information and knowledge, through the selection of variables that allow us to determine the dynamics of a given territory or region; and (iii) define a Geography of the Information Society in Continental Portugal.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Informatics Engineering
Abstract:
The ability to solve conflicting beliefs is crucial for multi-agent systems where the information is dynamic, incomplete and distributed over a group of autonomous agents. The proposed distributed belief revision approach consists of a distributed truth maintenance system and a set of autonomous belief revision methodologies. The agents have partial views and, frequently, hold disparate beliefs which are automatically detected by the system’s reason maintenance mechanism. The nature of these conflicts is dynamic and requires adequate methodologies for conflict resolution. The two types of conflicting beliefs addressed in this paper are Context Dependent and Context Independent Conflicts, which result, in the first case, from the assignment, by different agents, of opposite belief statuses to the same belief, and, in the latter case, from holding contradictory distinct beliefs. The belief revision methodology for solving Context Independent Conflicts is, basically, a selection process based on the assessment of the credibility of the opposing belief statuses. The belief revision methodology for solving Context Dependent Conflicts is, essentially, a search process for a consensual alternative based on a “next best” relaxation strategy.
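A minimal sketch of the two resolution strategies as described in the abstract (the data structures and tie-breaking rules are our assumptions, not the paper's):

```python
def resolve_context_independent(candidates):
    """Contradictory distinct beliefs held by different agents: select
    the belief whose supporters carry the most total credibility.

    candidates: dict mapping each belief to the list of credibilities
    of the agents endorsing it, e.g. {"p": [0.9], "not p": [0.5, 0.3]}.
    """
    return max(candidates, key=lambda b: sum(candidates[b]))

def resolve_context_dependent(rankings):
    """Search for a consensual alternative by "next best" relaxation:
    each agent ranks alternatives from best to worst, and every agent
    is relaxed to accept one more option per round until the agents'
    acceptable sets intersect.
    """
    for depth in range(1, max(len(r) for r in rankings) + 1):
        common = set.intersection(*(set(r[:depth]) for r in rankings))
        if common:
            # tie-break: the alternative ranked best on average
            return min(common, key=lambda a: sum(r.index(a) for r in rankings))
    return None  # no consensual alternative exists
```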