25 results for Apriori


Relevance:

20.00%

Publisher:

Abstract:

The problem of eliciting an a priori distribution is of primary importance for the methodology of Bayesian inference. Taking the subjectivist current as the most consistent way of "navigating" within the Bayesian sphere, it would be of the greatest interest to structure the elicitation problem within a framework in which the individual's degrees of belief receive the prominence they deserve at the conceptual level.

Relevance:

20.00%

Publisher:

Abstract:

Presents a technique for incorporating a priori knowledge from a state-space system into a neural network training algorithm. The training algorithm considered is chemotaxis, and the networks being trained are recurrent neural networks. Incorporating the a priori knowledge ensures that the resulting network has behaviour similar to that of the system it is modelling.
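A minimal sketch of the idea, assuming a linear recurrent network whose weights are seeded from approximate state-space matrices (all matrices, constants and names below are illustrative, not taken from the paper): chemotaxis training perturbs the weights with Gaussian noise and keeps a perturbation only when it lowers the simulation error.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" plant that produced the data (unknown to the training procedure).
A_true = np.array([[0.92, 0.08], [0.03, 0.78]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Hypothetical a priori knowledge: an approximate state-space model of the plant.
A_prior = np.array([[0.9, 0.1], [0.0, 0.8]])

def simulate(Wa, Wb, Wc, u):
    """Run a linear recurrent network over an input sequence u (T x 1)."""
    x = np.zeros(Wa.shape[0])
    y = []
    for ut in u:
        x = Wa @ x + Wb @ ut
        y.append(Wc @ x)
    return np.array(y)

# Training data generated by the "true" system (stand-in for measured behaviour).
u = rng.normal(size=(200, 1))
y_target = simulate(A_true, B, C, u)

# Incorporate the a priori knowledge: initialise the network at the approximate model.
Wa, Wb, Wc = A_prior.copy(), B.copy(), C.copy()
err = np.mean((simulate(Wa, Wb, Wc, u) - y_target) ** 2)

# Chemotaxis: random weight perturbations, accepted only if the error decreases.
for _ in range(500):
    dWa, dWb, dWc = (0.01 * rng.normal(size=W.shape) for W in (Wa, Wb, Wc))
    trial = np.mean((simulate(Wa + dWa, Wb + dWb, Wc + dWc, u) - y_target) ** 2)
    if trial < err:
        Wa, Wb, Wc, err = Wa + dWa, Wb + dWb, Wc + dWc, trial

print("final mean squared error:", err)
```

Seeding the weights at the prior model means the random search starts near the plant's behaviour instead of from scratch, which is the intended effect of incorporating the a priori knowledge.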

Relevance:

10.00%

Publisher:

Abstract:

Enormous quantities of data are currently generated and, most of the time, not properly analysed. As a result, there is an ever-widening gap between the data that exist and the amount of data that is actually analysed. This situation occurs very frequently in the healthcare domain. To address this problem, techniques have been created that allow large volumes of data to be analysed, extracting patterns and knowledge intrinsic to the data. Healthcare is an example of a field that produces enormous amounts of data every day but from which, most of the time, no useful knowledge is extracted. Such new knowledge could help health professionals answer a variety of problems. This dissertation presents the entire knowledge-discovery process: data analysis, data preparation, selection of attributes and algorithms, application of data mining techniques (classification, clustering and association rules), choice of algorithms (C5.0, CHAID, Kohonen, TwoSteps, K-means, Apriori) and evaluation of the resulting models. The project is based on the CRISP-DM methodology and was developed with the Clementine 12.0 tool. Its main goal is to extract patterns and profiles of donors who may develop certain diseases (anaemia, kidney disease, hepatitis, among others), and to identify which diseases or abnormal blood-component values may be common among donors.
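As an illustration of the association-rule step, here is a minimal sketch using the mlxtend implementation of Apriori on a toy, made-up donor table (the column names and thresholds are hypothetical and not taken from the dissertation, which used Clementine 12.0):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot table: each row is a donor, each column a binary finding (illustrative only).
donors = pd.DataFrame(
    {
        "low_hemoglobin": [1, 1, 0, 1, 0, 1],
        "anemia":         [1, 1, 0, 1, 0, 0],
        "high_alt":       [0, 0, 1, 0, 1, 1],
        "hepatitis":      [0, 0, 1, 0, 1, 0],
    },
    dtype=bool,
)

# Frequent itemsets with Apriori, then association rules filtered by confidence.
itemsets = apriori(donors, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```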

Relevance:

10.00%

Publisher:

Abstract:

ABSTRACT: Chronic low back pain (CLBP) is one of the most common clinical conditions, with high socioeconomic costs in western countries. Recent studies have shown that patients with CLBP present different patterns of activity, which influence their levels of functional disability. However, the evidence on these associations is still limited and inconclusive. To our knowledge, there is no scale validated for the Portuguese population that measures these activity patterns in CLBP patients. Purpose: To culturally adapt the Patterns of Activity Measure – Pain (POAM-P) scale to the Portuguese population with non-specific chronic low back pain (NSLBP) and to contribute to its validation. Method: The original English version of the POAM-P was translated, back-translated (blindly and independently) and adapted to the Portuguese language (POAM-P-VP) by a multidisciplinary team of translators, experts from different fields, and patients with NSLBP, following current guidelines for this process. The factor analysis and the analysis of the psychometric properties of the POAM-P-VP were based on a sample of 132 patients. Internal consistency was analysed with Cronbach's alpha coefficient (α), and test-retest reliability with the intraclass correlation coefficient (ICC 2,1). The convergent and discriminant construct validity of the POAM-P-VP components was examined against the Portuguese version of the Tampa Scale of Kinesiophobia (TSK-13-VP), using Spearman's correlation coefficient. All statistical calculations were performed in IBM SPSS Statistics (version 20). Results: The factor analysis identified three components of the POAM-P-VP (avoidance, excessive persistence and pain-contingent persistence), structurally different from the original POAM-P subscales. These components showed good to high internal consistency. Components 1 and 2 showed moderate to excellent test-retest reliability, whereas component 3 showed poor test-retest reliability, limiting its use in clinical practice and research. With regard to construct validity, none of the hypotheses established a priori was verified, so no conclusion could be drawn about the relation between activity patterns and kinesiophobia as measured by the TSK-13-VP. However, the avoidance component of the POAM-P-VP seems to measure content shared with the TSK-13-VP (rs = 0.15, p < 0.048). Conclusion: The adaptation of, and contribution to the validation of, the Portuguese version of the POAM-P scale is a starting point towards an instrument for measuring activity patterns in Portuguese patients with CLBP, and further studies are required for its full validation. Despite some limitations, this study is considered of high importance to physiotherapists and researchers seeking deeper knowledge and more effective intervention approaches for patients with chronic low back pain.
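A small sketch of two of the statistics named above, computed with numpy and scipy on made-up item scores (the data, item counts and scale totals are illustrative; the study itself used IBM SPSS Statistics 20):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Made-up POAM-P-style component scores for 132 respondents and 10 items.
latent = rng.normal(size=(132, 1))
items = latent + 0.8 * rng.normal(size=(132, 10))
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

# Convergent validity check: Spearman correlation between two scale totals.
poam_avoidance = items.sum(axis=1)
tsk_total = latent[:, 0] + rng.normal(size=132)      # stand-in for TSK-13-VP scores
rho, p = spearmanr(poam_avoidance, tsk_total)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```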

Relevance:

10.00%

Publisher:

Abstract:

We propose a method for brain atlas deformation in the presence of large space-occupying tumors, based on an a priori model of lesion growth that assumes radial expansion of the lesion from its starting point. First, an affine registration brings the atlas and the patient into global correspondence. Then, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. Finally, the seeded atlas is deformed, combining a method derived from optical flow principles and a model of lesion growth (MLG). Results show that the method can be applied to the automatic segmentation of structures and substructures in brains with gross deformation, with important medical applications in neurosurgery, radiosurgery and radiotherapy.
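A toy sketch of the radial-expansion idea behind such a lesion model, assuming a simple 3D volume and a displacement field that pushes voxels radially away from a seed point (grid size, radii and the decay profile are all invented for illustration and are not the paper's MLG):

```python
import numpy as np

# Illustrative 64^3 "atlas" volume and a tumor seed point.
shape = (64, 64, 64)
seed = np.array([32.0, 32.0, 32.0])
tumor_radius = 10.0        # voxels occupied by the synthetic tumor
influence_radius = 25.0    # beyond this distance the deformation vanishes

z, y, x = np.meshgrid(*(np.arange(s, dtype=float) for s in shape), indexing="ij")
offsets = np.stack([z, y, x], axis=-1) - seed          # vector from seed to each voxel
dist = np.linalg.norm(offsets, axis=-1)
dist_safe = np.maximum(dist, 1e-6)                     # avoid division by zero at the seed

# Radial displacement magnitude: full strength inside the tumor, fading linearly
# to zero at the influence radius (a simple profile, purely for illustration).
magnitude = np.clip((influence_radius - dist) / (influence_radius - tumor_radius), 0.0, 1.0)
magnitude *= tumor_radius * (dist > 0)

displacement = offsets / dist_safe[..., None] * magnitude[..., None]
tumor_mask = dist <= tumor_radius

print("tumor voxels:", int(tumor_mask.sum()))
print("max displacement (voxels):", float(np.abs(displacement).max()))
```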

Relevance:

10.00%

Publisher:

Abstract:

The application of forced unsteady-state reactors to the selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is motivated by the fact that favorable temperature and composition distributions, which cannot be achieved in any steady-state regime, can be obtained by means of unsteady-state operation. Under normal operation, the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to sustain the chemical reaction by itself. A normal mode of operation therefore usually requires a supply of supplementary heat, increasing the overall cost of the process. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, besides the ammonia, the moving heat wave inside the catalytic bed. Unsteady-state operation enables the exploitation of the thermal storage capacity of the catalytic bed, which acts as a regenerative heat exchanger, allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in configuring a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady-state reactor - matches the above-mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. Besides its advantages, however, the RFR has a main disadvantage: the 'wash out' phenomenon, i.e. emissions of unconverted reactants at every switch of the flow direction. As a consequence, our attention was focused on finding an alternative reactor configuration to the RFR which is not affected by uncontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the reactant feeding position. In the RN the flow direction is kept constant, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times which allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interaction between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of the contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used.
Paying attention to the above-mentioned aspects is important when higher activity, even at low feed temperatures, and low emissions of unconverted reactants are the main operating concerns. Also, the prediction of the reactor pseudo- or steady-state performance (regarding conversion, selectivity and thermal behavior) and of the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge about the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which a sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort; usually the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behavior in the case of low-exothermic reactions. In the present research work a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general issues related to the effect of noxious emissions on the environment, the analysis of suitable catalyst types for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information quickly and easily about forced unsteady-state reactor design, operation, important system parameters and their values, the mathematical description, the mathematical method for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach has been used. This approach, which uses the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operation regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two above-mentioned devices was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, enabling a focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance.
The non-isothermal system approach was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior in the case of the low-exothermic SCR of NOx with ammonia and low-temperature gas feeding. Besides the influence of the thermal effect, the influence of the principal operating parameters, such as switching time, inlet flow rate and initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter: an appropriate choice of this parameter enables the fulfilment of the process constraints. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for SCR of NOx with ammonia, both in usual operation and in the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics, taking into account perspectives that have not yet been analyzed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
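To illustrate the heat-trapping behaviour described above, here is a deliberately simplified 1D sketch of a reverse-flow bed (energy balance only, no reaction kinetics; all constants are invented for illustration and this is not the thesis model):

```python
import numpy as np

# Toy 1D reverse-flow reactor: gas convects heat through a solid catalyst bed,
# and the flow direction is reversed periodically so the heat wave stays trapped.
n = 200                     # grid cells along the bed
L = 1.0                     # bed length [m]
dx = L / n
u = 0.5                     # gas velocity magnitude [m/s]
h = 5.0                     # gas-solid heat exchange rate [1/s]
alpha = 0.02                # gas-to-solid heat capacity ratio (solid stores most heat)
t_switch = 60.0             # flow reversal period [s]
t_end = 600.0               # simulated time [s]
dt = 0.5 * dx / u           # CFL-limited explicit time step

Tg = np.full(n, 300.0)      # gas temperature [K]
Ts = np.full(n, 300.0)      # solid bed temperature [K]
Ts[n // 2 - 10: n // 2 + 10] = 600.0   # initially "ignited" hot zone in the bed
T_in = 300.0                # cold feed temperature [K]

t = 0.0
while t < t_end:
    direction = -1 if int(t / t_switch) % 2 else +1
    upwind = np.roll(Tg, direction)        # upstream neighbour for the current flow direction
    if direction == +1:
        upwind[0] = T_in
    else:
        upwind[-1] = T_in
    conv = -u * (Tg - upwind) / dx         # first-order upwind convection
    Tg = Tg + dt * (conv + h * (Ts - Tg))
    Ts = Ts + dt * alpha * h * (Tg - Ts)
    t += dt

print("peak bed temperature after %.0f s: %.1f K" % (t_end, Ts.max()))
```

The periodic reversal keeps the hot zone near the middle of the bed instead of letting the cold feed sweep it out of one end, which is the regenerative heat-exchanger effect the abstract refers to.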

Relevance:

10.00%

Publisher:

Abstract:

This paper seeks to clarify how and why, in the doctrine of the "early" Heidegger, hermeneutics emerges as the indispensable methodological complement to the transcendentalism of phenomenology. It finds that this methodological affinity is the decisive link between that doctrine and fundamental ontology, in contrast with an evident thematic disparity: consciousness, intentionality and reflection are three crucial phenomenological references that lack a fundamental-ontological counterpart. But while Heidegger preserves the transcendental dimension inherited from phenomenology, he also gives his doctrine a specifically hermeneutic character, evident in the transformation undergone by the key notion of Auslegung. Hermeneutics and transcendentalism, in fact, are not only not antagonistic but are harmonized in the modus operandi of fundamental ontology. In his inquiry into the a priori of all constitution of meaning, indebted to an anti-deductivism as pronounced as that of phenomenology, Heidegger introduces an unprecedented methodological dimension. After all, the self-showing of being does not occupy the theoretical, supposedly a-methodical place that phenomenology assigns to immediacy. Understanding this mutation of the phenomenological method naturally involves exploring in detail how Heidegger integrated the disparate doctrinal components of fundamental ontology and why he insisted on questioning the neutrality usually demanded of method. Transposing Husserl's presentialist transcendentalism into an ontological project, he reinterpreted the methodology of "presentifying intuitiveness" until it became compatible with a radically broadened notion of phenomenon. A legitimate phenomenological inquiry must therefore investigate transcendentally the "meaning of being" as the absolute a priori. Phenomenology must be carried out as ontology.

Relevance:

10.00%

Publisher:

Abstract:

Introduction: The prevalence of coronary artery disease (CAD) is ever increasing in western industrialized societies. An individual's overall risk for CAD may be quantified by integrating a number of factors including, but not limited to, cardiorespiratory fitness, body composition, blood lipid profile and blood pressure. It might be expected that interventions aimed at improving any or all of these independent factors might improve an individual's overall risk. To this end, the influence of standard endurance-type exercise on cardiorespiratory fitness, body composition, blood lipids and blood pressure, and by extension the reduction of coronary risk factors, is well documented. On the other hand, interval training (IT) has been shown to provide an extremely powerful stimulus for improving indices of cardiorespiratory function, but the influence of this type of training on coronary risk factors is unknown. Moreover, the vast majority of studies investigating the effects of IT on fitness have used laboratory-type training protocols. As a result, the influence of participation in interval-type recreational sports on cardiorespiratory fitness and coronary risk factors is unknown. Aims: The aim of the present study was to evaluate the effectiveness of recreational ball hockey, a sport associated with interval-type activity patterns, on indices of aerobic function and coronary risk factors in sedentary men in the approximate age range of 30-60 years. Individual risk factors were compiled into an overall coronary risk factor score using the Framingham Point Scale (FPS). Methods: Twenty-four sedentary males (age range 30-60) participated in the study. Subject activity level was assessed a priori using questionnaire responses. All subjects (experimental and control) were assessed to have been inactive and sedentary prior to participation in the study. The experimental group (43 ± 3 years; 90 ± 3 kg) (n = 11) participated in one season of recreational ball hockey (our surrogate for IT). Members of this group played a total of 16 games during an 11-week span. During this time, the control group (43 ± 2 years; 89 ± 2 kg) (n = 11) performed no training and continued with their sedentary lifestyle. Prior to and following the ball hockey season, experimental and control subjects were tested for the following variables: 1) cardiorespiratory fitness (as VO2 max), 2) blood lipid profile, 3) body composition, 4) waist-to-hip ratio, 5) blood glucose levels and 6) blood pressure. Subject VO2 max was assessed using the Rockport submaximal walking test on an indoor track. To assess body composition we determined body mass index (BMI), % body fat, % lean body mass and waist-to-hip ratio. The blood lipid profile included high-density lipoprotein, low-density lipoprotein and total cholesterol levels; in addition, the ratio of total cholesterol to high-density lipoprotein was calculated. Blood triglycerides were also assessed. All data were analyzed using independent t-tests and all data are expressed as mean ± standard error. Statistical significance was accepted at p ≤ 0.05. Results: Pre-test values for all variables were similar between the experimental and control groups. Moreover, although the intervention used in this study was associated with changes in some variables for subjects in the experimental group, subjects in the control group did not exhibit any changes over the same time period.
BODY COMPOSITION: The % body fat of experimental subjects decreased by 4.6 ± 0.5%, from 28.1 ± 2.6 to 26.9 ± 2.5%, while that of the control group was unchanged at 22.7 ± 1.4 and 22.2 ± 1.3%. However, the lean body mass of experimental and control subjects did not change, at 64.3 ± 1.3 versus 66.1 ± 1.3 kg and 65.5 ± 0.8 versus 64.7 ± 0.8 kg, respectively. In terms of body mass index and waist-to-hip ratio, neither the experimental nor the control group showed any significant change. Respective values (pre and post) for the waist-to-hip ratio were 1 ± 0.1 versus 0.9 ± 0.1 (experimental) and 0.9 ± 0.1 versus 0.9 ± 0.1 (controls), while for BMI they were 29 ± 1.4 versus 29 ± 1.2 (experimental) and 26 ± 0.7 versus 26 ± 0.7 (controls). CARDIORESPIRATORY FITNESS: In the experimental group, predicted values for absolute VO2 max increased by 10 ± 3% (i.e. 3.3 ± 0.1 to 3.6 ± 0.1 liters min-1), while that of control subjects did not change (3.4 ± 0.2 and 3.4 ± 0.2 liters min-1). In terms of relative values for VO2 max, the experimental group increased by 11 ± 2% (37 ± 1.4 to 41 ± 1.4 ml kg-1 min-1), while that of control subjects did not change (41 ± 1.4 and 40 ± 1.4 ml kg-1 min-1). BLOOD LIPIDS: Compared to pre-test values, post-test values for HDL were decreased by 14 ± 5% in the experimental group (from 52.4 ± 4.4 to 45.2 ± 4.3 mg dl-1), while HDL data for the control group were unchanged (49.7 ± 3.6 and 48.3 ± 4.1 mg dl-1, respectively). On the other hand, LDL levels did not change for either the experimental or control group (110.2 ± 10.4 versus 112.3 ± 7.1 mg dl-1 and 106.1 ± 11.3 versus 127 ± 15.1 mg dl-1, respectively). Further, total cholesterol did not change in either the experimental or control group (181.3 ± 8.7 versus 178.7 ± 4.9 mg dl-1 and 190.7 ± 12.2 versus 197.1 ± 16.1 mg dl-1, respectively). Similarly, the ratio of TC/HDL did not change for either the experimental or control group (3.8 ± 0.4 versus 4.5 ± 0.5 and 4 ± 0.4 versus 4.2 ± 0.4, respectively). Blood triglyceride levels were also not altered in either the experimental or control group (100.3 ± 19.6 versus 114.8 ± 15.3 mg dl-1 and 140 ± 23.5 versus 137.3 ± 17.9 mg dl-1, respectively). BLOOD GLUCOSE: Fasted blood glucose levels did not change in either the experimental or control group. Pre- and post-values for the experimental and control groups were 92.5 ± 4.8 versus 93.3 ± 4.3 mg dl-1 and 92.3 ± 11.3 versus 93.2 ± 2.6 mg dl-1, respectively. BLOOD PRESSURE: No aspect of blood pressure was altered in either the experimental or control group. For example, pre- and post-test systolic blood pressures were 131 ± 2 versus 129 ± 2 mmHg (experimental) and 123 ± 2 versus 125 ± 2 mmHg (controls), respectively. Pre- and post-test diastolic blood pressures were 84 ± 2 versus 83 ± 2 mmHg (experimental) and 81 ± 1 versus 82 ± 1 mmHg (controls), respectively. Similarly, calculated pulse pressure was not altered in the experimental or control group, as pre- and post-test values were 47 ± 1 versus 47 ± 2 mmHg and 42 ± 2 versus 43 ± 2 mmHg, respectively. FRAMINGHAM POINT SCORE: The concerted changes reported above produced an increased risk in the Framingham Point Score for the subjects in the experimental group. For example, the pre- and post-test FPS increased from 1.4 ± 0.9 to 2.7 ± 0.7. On the other hand, pre- and post-test scores for the control group were 1.8 ± 1 versus 1.8 ± 0.9. Conclusions: Our data confirm previous studies showing that interval-type exercise is a useful intervention for increasing aerobic fitness.
Moreover, the increase in VO2 max we found in response to limited participation in ball hockey (i.e. 16 games) suggests that recreational sport may help reduce this aspect of coronary risk in previously sedentary individuals. On the other hand, our results showing little or no positive change in body composition, blood lipids or blood pressure suggest that one season of recreational sport is not, in and of itself, a powerful enough stimulus to reduce the overall risk of coronary artery disease. In light of this, it is recommended that, in addition to participation in recreational sport, regular physical activity be used as an adjunct to provide a more powerful overall stimulus for decreasing coronary risk factors. LIMITATIONS: The increase in the FPS we found for the experimental group, indicative of an increased risk for coronary disease, was largely due to the large decrease in HDL observed after one season of ball hockey. In light of the fact that cardiorespiratory fitness was increased and % body fat was decreased, as well as the fact that other parameters such as blood pressure showed positive (but not statistically significant) trends, the possibility that the decrease in HDL shown by our data was anomalous should be considered. FUTURE DIRECTIONS: The results of this study, suggesting that recreational sport may be a potentially useful intervention in the reduction of CAD, need to be corroborated by future studies specifically employing 1) more rigorous assessment of fitness and fitness change and 2) more prolonged or frequent participation.
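The analysis above rests on independent t-tests between the experimental and control groups; a minimal sketch with scipy on made-up values (the numbers are invented, not the study's data):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

# Hypothetical post-season relative VO2 max values (ml kg-1 min-1) for two groups of 11.
experimental = rng.normal(loc=41.0, scale=4.0, size=11)
control = rng.normal(loc=40.0, scale=4.0, size=11)

t_stat, p_value = ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant at 0.05: {p_value <= 0.05}")
```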

Relevance:

10.00%

Publisher:

Abstract:

This thesis is composed of three essays related to mechanism design and auctions. In the first essay I study the design of efficient Bayesian mechanisms in environments where the agents' utility functions depend on the chosen alternative even when they do not participate in the mechanism. In addition to an allocation rule and a payment rule, the planner can issue threats in order to induce the agents to participate in the mechanism and to maximize his own surplus; the planner can also form a presumption about the type of an agent who does not participate. I prove that the solution of the design problem can be found through a max-min choice of the presumed types and the threats. I apply this to the design of an efficient multi-unit auction when ownership of the good by one buyer imposes negative externalities on the other buyers. The second essay considers the fair-return rule used by the European Space Agency (ESA). It guarantees each member state a return proportional to its contribution, in the form of contracts awarded to companies from that state. The fair-return rule conflicts with the principle of free competition, since contracts are not necessarily awarded to the companies that submit the lowest bids. This has raised discussions about the use of the rule: large states with strong national space programmes view its strict application as an obstacle to competitiveness and cost-effectiveness. A priori this rule seems more costly to the agency than traditional auctions. We prove, on the contrary, that an appropriate implementation of the fair-return rule can make it less costly than traditional free-competition auctions. We consider the complete-information case, where the firms' technology levels are common knowledge, and the incomplete-information case, where firms privately observe their production costs. Finally, in the third essay I derive an optimal procurement mechanism in an environment where a buyer of heterogeneous items faces potential suppliers from different groups and is constrained to choose a list of winners that is compatible with quotas assigned to the different groups. The optimal allocation rule consists of assigning priority levels to the suppliers on the basis of the individual costs they report to the decision maker. The way these priority levels are determined is subjective but known to all before the procurement takes place. The reported costs induce scores for each potential list of winners. The items are then purchased from the list with the best score, provided it does not exceed the buyer's value. I also show that, in general, it is not optimal to purchase the items through separate auctions.
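A toy sketch of the scoring idea in the third essay, under strong simplifying assumptions of my own (a score equal to a priority-weighted sum of reported costs, and quotas expressed as a minimum number of winners per group); the thesis' actual optimal rule is more general than this:

```python
from itertools import combinations

# Hypothetical suppliers: (name, group, reported_cost); priorities and quotas are invented.
suppliers = [("s1", "A", 10.0), ("s2", "A", 12.0), ("s3", "B", 9.0), ("s4", "B", 15.0)]
priority = {"s1": 1.0, "s2": 0.9, "s3": 1.2, "s4": 1.0}   # subjective, announced in advance
quota = {"A": 1, "B": 1}        # minimum number of winners required from each group
winners_needed = 2              # number of items / winners
buyer_value = 40.0              # the buyer walks away above this score

def feasible(lst):
    """A winner list is feasible if it satisfies every group quota."""
    return all(sum(1 for s in lst if s[1] == g) >= q for g, q in quota.items())

def score(lst):
    """Score of a list: priority-weighted sum of the reported costs (illustrative rule)."""
    return sum(priority[name] * cost for name, _, cost in lst)

candidates = [c for c in combinations(suppliers, winners_needed) if feasible(c)]
best = min(candidates, key=score)
if score(best) <= buyer_value:
    print("winners:", [name for name, _, _ in best], "score:", score(best))
else:
    print("no purchase: the best score exceeds the buyer's value")
```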

Relevance:

10.00%

Publisher:

Abstract:

In the current study, an epidemiological study is carried out by means of a literature survey in groups identified to be at higher potential for DDIs, as well as in other cases, to explore patterns of DDIs and the factors affecting them. The structure of the FDA Adverse Event Reporting System (FAERS) database is studied and analyzed in detail to identify issues and challenges in data mining the drug-drug interactions. The necessary pre-processing algorithms are developed based on the analysis, and the Apriori algorithm is modified to suit the process. Finally, the modules are integrated into a tool to identify DDIs. The results are compared against a standard drug interaction database for validation. 31% of the associations obtained were identified to be new, and the match with existing interactions was 69%. This match clearly indicates the validity of the methodology and its applicability to similar databases. Formulating the results using generic drug names expanded their relevance to a global scale. This global applicability helps health care professionals worldwide to observe caution during the various stages of drug administration, thus considerably enhancing pharmacovigilance.
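A small sketch of the kind of association counting involved, assuming FAERS-like reports reduced to sets of generic drug names (the reports, thresholds and two-pass shortcut are illustrative; the study itself modifies the full Apriori algorithm):

```python
from collections import Counter
from itertools import combinations

# Toy adverse-event reports, each reduced to the set of generic drugs it mentions.
reports = [
    {"warfarin", "aspirin"},
    {"warfarin", "aspirin", "omeprazole"},
    {"simvastatin", "clarithromycin"},
    {"warfarin", "amiodarone"},
    {"simvastatin", "clarithromycin", "aspirin"},
]
min_support = 2 / len(reports)   # a pair must appear in at least 40% of reports

# First Apriori pass: frequent single drugs; second pass: frequent drug pairs.
singles = Counter(drug for r in reports for drug in r)
frequent_drugs = {d for d, c in singles.items() if c / len(reports) >= min_support}

pairs = Counter()
for r in reports:
    for a, b in combinations(sorted(r & frequent_drugs), 2):
        pairs[(a, b)] += 1

for (a, b), count in pairs.items():
    support = count / len(reports)
    if support >= min_support:
        print(f"candidate DDI signal: {a} + {b} (support {support:.2f})")
```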

Relevance:

10.00%

Publisher:

Abstract:

This work presents the Bayes invariant quadratic unbiased estimator, BAIQUE for short. A Bayesian approach is used here to estimate the covariance functions of the regionalized variables that appear in the spatial covariance structure of a mixed linear model. First, a brief review of spatial processes, the variance-covariance components structure and Bayesian inference is given, since this project deals with these concepts. Then the linear equations model corresponding to BAIQUE in the general case is formulated. That Bayes estimator of variance components, with too many unknown parameters, is complicated to solve analytically. Hence, in order to facilitate handling this system, the BAIQUE of a spatial covariance model with two parameters is considered. The Bayesian estimation arises as the solution of a system of linear equations, which requires the linearity of the covariance functions in the parameters. Here the availability of prior information on the parameters is assumed. This information includes a priori distribution functions that enable the first and second moment matrices to be found. The Bayesian estimation suggested here depends only on the second moment of the prior distribution. The estimator appears as a quadratic form y'Ay, where y is the vector of filtered data observations. This quadratic estimator is used to estimate a linear function of the unknown variance components. The matrix A of BAIQUE plays an important role: if such a symmetric matrix exists, then the Bayes risk becomes minimal and the unbiasedness conditions are fulfilled. Therefore, the symmetry of this matrix is elaborated in this work. By dealing with an infinite series of matrices, a representation of the matrix A is obtained which shows the symmetry of A. In this context, the largest singular value of the decomposed matrix of the infinite series is considered in order to deal with the convergence condition, and it is also connected with Gerschgorin discs and the Poincaré theorem. Then the BAIQUE model is computed and compared for several experimental designs. The comparison deals with different aspects, such as the influence of the position of the design points in a fixed interval. The designs considered are those with points distributed in the interval [0, 1]. These experimental structures are compared with respect to the Bayes risk and to norms of the matrices corresponding to distances, covariance structures and the matrices that have to satisfy the convergence condition. Different types of regression functions and distance measurements are also handled. The influence of scaling on the design points is studied; moreover, the influence of the covariance structure on the best design is investigated and different covariance structures are considered. Finally, BAIQUE is applied to real data. The corresponding outcomes are compared with the results of other methods for the same data. Thereby, the special BAIQUE, which estimates the general variance of the data, achieves a result very close to the classical empirical variance.
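As a pointer to the quadratic-form view used above, here is a tiny numpy sketch of the special case mentioned at the end: the classical empirical variance written as y'Ay with a symmetric matrix A (the data are synthetic and this is only the elementary special case, not the full BAIQUE construction):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=2.0, scale=3.0, size=50)     # synthetic observations
n = y.size

# Symmetric matrix A such that y'Ay equals the classical empirical variance:
# A = (I - 11'/n) / (n - 1), i.e. a scaled centering projection.
ones = np.ones((n, n)) / n
A = (np.eye(n) - ones) / (n - 1)

quadratic_form = y @ A @ y
print("y'Ay              :", quadratic_form)
print("np.var(y, ddof=1) :", np.var(y, ddof=1))   # matches the quadratic form
```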

Relevance:

10.00%

Publisher:

Abstract:

This paper compiles relevant academic literature on entry strategies and on methodologies for deciding whether to contract outsourcing services, for the case of companies planning to expand into foreign markets. The way in which a company plans its entry into a foreign market, considers and evaluates the relevant information, and designs its strategy determines whether the entry succeeds. The methodologies considered, in turn, focus on the strategic level of the organizational pyramid. They range from simple methods to those based on multi-criteria decision theory, both individual and hybrid. Finally, system dynamics is presented as a valuable tool in the process, since it can be combined with multi-criteria methods.

Relevance:

10.00%

Publisher:

Abstract:

In this paper a look is taken at how implant technology can be used either to increase the range of human abilities and/or to diminish the effects of a neural illness, such as Parkinson's disease. The key element is the need for a clear interface linking the human brain directly with a computer. The area of interest here is the use of implant technology, particularly where a connection is made between technology and the human brain and/or nervous system. Pilot tests and experimentation are invariably carried out a priori to investigate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies are discussed here. The paper goes on to describe human experimentation, in particular that carried out by the author himself, which led to him receiving a neural implant that linked his nervous system bi-directionally with the internet. With this in place, neural signals were transmitted to various technological devices to control them directly. In particular, feedback to the brain was obtained from the fingertips of a robot hand and from ultrasonic (extra) sensory input. A view is taken of the prospects for the future, both in the near term as a therapeutic device and in the long term as a form of enhancement.

Relevance:

10.00%

Publisher:

Abstract:

The interface between humans and technology is a rapidly changing field. In particular, as technological methods have improved dramatically, interaction has become possible that could only be speculated about even a decade earlier. This interaction can, though, take a wide range of forms. Standard buttons and dials with televisual feedback are perhaps a common example. But now virtual reality systems, wearable computers and, most of all, implant technology are throwing up a completely new concept, namely a symbiosis of human and machine. It is no longer sensible simply to consider how a human interacts with a machine, but rather how the human-machine symbiotic combination interacts with the outside world. In this paper we take a look at some of the recent approaches, putting implant technology in context. We also consider some specific practical examples which may well alter the way we look at this symbiosis in the future. The main area of interest as far as symbiotic studies are concerned is clearly the use of implant technology, particularly where a connection is made between technology and the human brain and/or nervous system. Pilot tests and experimentation are often carried out a priori to investigate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies are discussed briefly here. The paper, however, concentrates on human experimentation, in particular that carried out by the authors themselves, firstly to indicate what possibilities exist now with available technology, but perhaps more importantly to show what might be possible with such technology in the future and how this may well have extensive social effects. The driving force behind the integration of technology with humans on a neural level has historically been to restore lost functionality in individuals who have suffered neurological trauma such as spinal cord damage, or who suffer from a debilitating disease such as amyotrophic lateral sclerosis. Very few would argue against the development of implants to enable such people to control their environment, or some aspect of their own body functions. Indeed, in the short term this technology has applications for the amelioration of symptoms in the physically impaired, such as alternative senses being bestowed on a blind or deaf individual. However, the issue becomes distinctly more complex when it is proposed that such technology be used on those with no medical need, but who instead wish to enhance and augment their own bodies, particularly in terms of their mental attributes. These issues are discussed here in the light of practical experimental test results and their ethical consequences.

Relevance:

10.00%

Publisher:

Abstract:

The question of the crowding-out of private investment by public expenditure, public investment in particular, in the Brazilian economy has been discussed more in ideological terms than on empirical grounds. The present paper tries to avoid the limitations of previous studies by estimating an equation for private investment which makes it possible to evaluate the effect of economic policies on private investment. The private investment equation was deduced by modifying the optimal flexible accelerator model (OFAM), incorporating some channels through which public expenditure influences private investment. The OFAM consists in adding adjustment costs to the neoclassical theory of investment. The investment function deduced is quite general and has the following explanatory variables: relative prices (user cost of capital/input prices ratios), real interest rates, real product, public expenditures and the lagged private stock of capital. The model was estimated on data for private manufacturing industry. The procedure adopted in estimating the model was to begin with a model as general as possible, then apply restrictions to the model's parameters and test their statistical significance. Complete diagnostic testing was also carried out in order to test the stability of the estimated equations. This procedure avoids the shortcomings of estimating a model with a priori restrictions on its parameters, which may lead to model misspecification. The main findings of the present study were: the increase in public expenditure, at least in the long run, has in general a positive expectation effect on private investment greater than its crowding-out effect owing to the simultaneous rise in interest rates; a change in economic policy, such as that of the Geisel administration, may have an important effect on private investment; and relative prices are relevant in determining the level of the desired stock of capital and of private investment.
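A minimal sketch of the general-to-specific estimation strategy described above, using statsmodels OLS on synthetic data (the variable names, data-generating process and the particular restriction being tested are all invented for illustration and are not the paper's dataset or specification):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 120  # periods of synthetic data

# Synthetic explanatory variables loosely mirroring the OFAM specification.
relative_prices = rng.normal(1.0, 0.1, T)
real_interest = rng.normal(0.05, 0.02, T)
real_product = np.cumsum(rng.normal(0.5, 1.0, T))
public_expenditure = np.cumsum(rng.normal(0.3, 1.0, T))
lagged_capital = np.cumsum(rng.normal(0.4, 1.0, T))

# Synthetic private investment generated from an assumed (made-up) structure plus noise.
investment = (5 - 3 * relative_prices - 20 * real_interest + 0.4 * real_product
              + 0.1 * public_expenditure + 0.2 * lagged_capital + rng.normal(0, 1, T))

# General model first: all candidate regressors enter unrestricted.
X = sm.add_constant(np.column_stack([relative_prices, real_interest, real_product,
                                     public_expenditure, lagged_capital]))
general = sm.OLS(investment, X).fit()
print(general.summary())

# One step of the general-to-specific procedure: test a zero restriction
# (here on the public expenditure coefficient, named x4 in the default output)
# before deciding whether to drop that regressor.
print(general.t_test("x4 = 0"))
```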