976 results for Algorithmic Probability


Relevance:

10.00%

Publisher:

Abstract:

ABSTRACT: The aim of this study was to evaluate the effectiveness of a physiotherapy intervention program, compared with conservative treatment alone (moist heat, ultrasound and massage), on pain and functional capacity in elderly patients with knee osteoarthritis. Twenty patients who met the inclusion criteria were selected and randomly assigned to the two treatment groups: 9 to group A (experimental) and 11 to group B (control). All patients gave their informed consent. This is an experimental, randomized controlled trial (RCT) with blinded assessment. The intervention consisted of 15 individual treatment sessions, carried out 3 times per week. The therapeutic program of group A comprised the conservative treatment (20 minutes of moist heat, 5 minutes of continuous ultrasound at 1.5 W/cm², and approximately 10 minutes of local massage) plus the therapeutic exercise protocol under study, consisting of isometric quadriceps contractions, knee muscle strengthening and aerobic training; the intensity of this exercise protocol progressed weekly. Group B received only the conservative treatment (as in group A). Pain and functional capacity were assessed with the Knee Injury and Osteoarthritis Outcome Score (KOOS) questionnaire, and the Mann-Whitney and Kruskal-Wallis tests were used for between-group comparisons. The results suggest no statistically significant differences between the groups, although the control group obtained better results: group B showed a decrease in pain of 17.33, compared with -3.00 in group A (p = 0.101), and an improvement in functional capacity of 13.00, while functional capacity remained unchanged in group A (0.00; p = 0.080). These results suggest that there are no significant differences between the two intervention modalities, highlighting the need for further research on this exercise protocol and its effectiveness.
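
As a minimal illustration of the between-group comparison used above, the sketch below applies the Mann-Whitney U test to two synthetic samples of score changes (invented values, not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical KOOS score changes for two independent groups
# (invented values; the study's raw data are not reported in the abstract).
group_a = [-5, 0, 2, -3, 1, -4, 0, 2, -2]               # experimental, n = 9
group_b = [20, 15, 18, 12, 25, 10, 22, 14, 16, 19, 13]  # control, n = 11

# Two-sided Mann-Whitney U test for independent samples
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```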

Relevance:

10.00%

Publisher:

Abstract:

Although there are differences from individual to individual, all people are in an ageing process, which should ideally be active and healthy. However, as old age approaches, and due to numerous factors, the probability of a person feeling alone, of having feelings such as loneliness, or of being isolated or socially excluded increases. This report stems from a project designed and developed with people who felt lonely, either elderly or with disabilities, and who, for that reason, were referred to home care support. The aim was to meet some of these people's daily needs, and it is relevant to understand whether this support contributed to improving their quality of life and to avoiding or reducing moments and feelings of loneliness through the (re)activation of support networks. The project "Desatando os nós da solidão com laços de afeto" was, as far as possible, designed and developed together with the participants. It branched into three subprojects, each addressing more specifically the needs and problems felt by each person, seeking to contribute to improving quality of life by drawing on each person's potential and on the resources available in the community. The project will continue, as it involved volunteers and staff from the institution as well as people from the participants' closest support networks, and it has already brought about some changes, a starting point for reaching the project's guiding utopia.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider a Stackelberg duopoly competition with differentiated goods and with unknown costs. The firms' aim is to choose the output levels of their products according to the well-known concept of perfect Bayesian equilibrium. One firm (F1) first chooses the quantity q1 of its good; the other firm (F2) observes q1 and then chooses the quantity q2 of its good. We suppose that each firm has two different technologies and uses one of them following a probability distribution. The use of one technology or the other affects the unitary production cost. We show that there is exactly one perfect Bayesian equilibrium for this game. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest cost.
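
A perfect Bayesian equilibrium of this kind can be computed by backward induction. The abstract does not give the demand specification, so the sketch below assumes a linear inverse demand p_i = a - q_i - b*q_j and two cost types per firm; all parameter values are illustrative, not the paper's:

```python
# Backward-induction sketch of a Stackelberg game with private cost types,
# assuming linear inverse demand p_i = a - q_i - b*q_j (illustrative only).

def follower_best_response(q1, c2, a=10.0, b=0.5):
    """F2 observes q1, knows its own unit cost c2, and maximizes
    (a - q2 - b*q1 - c2) * q2, giving the usual linear best response."""
    return max(0.0, (a - b * q1 - c2) / 2.0)

def leader_output(c1, expected_c2, a=10.0, b=0.5):
    """F1 knows its own cost c1 but only the distribution of c2, so it
    maximizes expected profit against F2's expected best response."""
    return max(0.0, (a * (1 - b / 2) + (b / 2) * expected_c2 - c1) / (2 - b**2))

# Two technologies per firm: a high-cost one used with probability 0.4
# and a low-cost one used with probability 0.6 (assumed values).
costs, probs = (3.0, 1.0), (0.4, 0.6)
e_c2 = sum(c * p for c, p in zip(costs, probs))

for c1 in costs:
    q1 = leader_output(c1, e_c2)
    q2 = {c2: round(follower_best_response(q1, c2), 3) for c2 in costs}
    print(f"c1 = {c1}: q1 = {q1:.3f}, follower responses by cost type: {q2}")
```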

Relevance:

10.00%

Publisher:

Abstract:

In recent years, vehicular cloud computing (VCC) has emerged as a new technology with a wide range of uses in multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, since vehicles themselves have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can have disastrous consequences, such as loss of life or financial fraud. To address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of settings, a multimedia-based healthcare application is used to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one member of the group as cluster-head. The cluster-heads then assist in efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, and each automaton learns from the environment and takes adaptive decisions to identify malicious activity in the network. The stochastic environment in which an automaton performs its actions issues a reward or penalty, and the automaton updates its action probability vector after receiving this reinforcement signal. The proposed scheme was evaluated using extensive simulations in ns-2 with SUMO. The results indicate that the proposed scheme yields a 10% improvement in the detection rate of malicious nodes compared with existing schemes.
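
The abstract does not state which LA update rule is used; a standard choice in the learning automata literature is the linear reward-penalty (L_RP) scheme, sketched below with illustrative learning rates:

```python
import numpy as np

def la_update(p, chosen, rewarded, a=0.1, b=0.05):
    """Linear reward-penalty (L_RP) update of an action-probability vector
    (standard textbook form; the paper's exact LA scheme is an assumption here)."""
    p = np.asarray(p, dtype=float).copy()
    r = len(p)
    if rewarded:
        # Reward: p[chosen] += a*(1 - p[chosen]); the others shrink by (1 - a).
        p *= (1 - a)
        p[chosen] += a
    else:
        # Penalty: p[chosen] *= (1 - b); the others gain b/(r-1) each.
        p = b / (r - 1) + (1 - b) * p
        p[chosen] -= b / (r - 1)
    return p / p.sum()   # renormalize to guard against float drift

# Example: 3 candidate cluster-heads; the environment favours action 0.
rng = np.random.default_rng(0)
p = np.full(3, 1 / 3)
for _ in range(500):
    action = rng.choice(3, p=p)
    rewarded = rng.random() < (0.9 if action == 0 else 0.3)
    p = la_update(p, action, rewarded)
print(p)   # most probability mass ends up on action 0
```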

Relevance:

10.00%

Publisher:

Abstract:

IEEE 802.11 is one of the most well-established and widely used standards for wireless LANs. Its Medium Access Control (MAC) layer assumes that devices adhere to the standard's rules and timers to ensure fair access to and sharing of the medium. However, the flexibility and configurability of wireless card drivers make it possible for selfish, misbehaving nodes to gain an advantage over well-behaving nodes. The existence of selfish nodes degrades the QoS of the other devices in the network and may increase their energy consumption. In this paper we propose a green solution for selfish misbehavior detection in IEEE 802.11-based wireless networks. The proposed scheme works in two phases: a Global phase, which detects whether the network contains selfish nodes, and a Local phase, which identifies which node or nodes within the network are selfish. The network must usually be examined for selfish nodes frequently during its operation, since any node may act selfishly at any time. Our solution is green in the sense that it saves network resources: it avoids wasting the nodes' energy by examining every individual node for selfishness when this is not necessary. The proposed detection algorithm is evaluated using extensive OPNET simulations. The results show that the Global network metric clearly indicates the existence of a selfish node, while the Local node metric successfully identifies the selfish node(s). We also provide a mathematical analysis of selfish misbehavior and derive formulas for the successful channel access probability.
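
The paper's derived formulas are not reproduced in the abstract; as a hedged illustration of why a selfish configuration pays off, the sketch below uses the common fixed-contention-window approximation, in which a node transmits in a random slot with probability tau = 2/(W + 1) and accesses the channel successfully when it transmits alone:

```python
def tau(W):
    """Per-slot transmission probability for a fixed contention window W
    (textbook approximation, no exponential backoff): tau = 2 / (W + 1)."""
    return 2.0 / (W + 1)

def access_prob(taus, i):
    """Probability that node i transmits alone in a slot, i.e. accesses
    the channel successfully, given all nodes' transmission probabilities."""
    p = taus[i]
    for j, t in enumerate(taus):
        if j != i:
            p *= 1.0 - t
    return p

# Four well-behaved nodes (CW = 31) and one selfish node using CW = 7.
windows = [31, 31, 31, 31, 7]
taus = [tau(W) for W in windows]
for i, W in enumerate(windows):
    print(f"node {i} (W={W:2d}): P(successful access) = {access_prob(taus, i):.4f}")
```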

Relevance:

10.00%

Publisher:

Abstract:

In-network storage of data in wireless sensor networks helps reduce communications inside the network and favors data aggregation. In this paper, we consider the use of n-out-of-m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n-out-of-m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model for evaluating the probability of correct data encoding and decoding, and we use this model together with simulations to show how, in the case studies, the parameters of the n-out-of-m codes and of the network should be configured in order to achieve correct data coding and decoding with high probability.
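
The combinatorial core of such an evaluation can be made concrete: assuming (as a simplification of the paper's model) that each of the m dispersed fragments survives independently with probability p, the data can be decoded exactly when at least n fragments are available:

```python
from math import comb

def decode_prob(n, m, p):
    """Probability of recovering data dispersed with an (n, m) code when
    each of the m fragments survives independently with probability p:
    decoding succeeds iff at least n fragments are available."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))

# Example: disperse data over m = 10 nodes; any n = 4 fragments suffice.
for p in (0.6, 0.7, 0.8, 0.9):
    print(f"fragment survival p = {p}: P(decode) = {decode_prob(4, 10, p):.4f}")
```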

Relevance:

10.00%

Publisher:

Abstract:

This work shows that the synthesis of protein plastic antibodies tailored with selected charged monomers around the binding site enhances protein binding. These charged receptor sites are placed over a neutral polymeric matrix, thereby inducing a suitable orientation of the protein with respect to its binding site. This is confirmed by preparing control materials with neutral monomers and also with a non-imprinted template. The concept is applied here to Prostate Specific Antigen (PSA), the protein of choice for screening prostate cancer throughout the population, with serum levels >10 ng/mL pointing to a high probability of associated cancer.

Protein Imprinted Materials with charged binding sites (C/PIM) were produced by surface imprinting over graphene layers to which the protein was first covalently attached. Vinylbenzyl(trimethylammonium chloride) and vinyl benzoate were introduced as charged monomers labelling the binding site and were allowed to self-organize around the protein. The subsequent polymerization was carried out by radical polymerization of vinylbenzene. Neutral PIM (N/PIM) prepared without oriented charges and non-imprinted materials (NIM) obtained without template were used as controls.

These materials were used to develop a simple and inexpensive potentiometric sensor for PSA. They were included as ionophores in plasticized PVC membranes and tested over electrodes with solid or liquid conductive contacts, made of conductive carbon over a syringe or of an inner reference solution over micropipette tips. The electrodes with charged monomers showed a more stable and sensitive response, with an average slope of -44.2 mV/decade and a detection limit of 5.8 × 10⁻¹¹ mol/L (2 ng/mL). The corresponding non-imprinted sensors showed lower sensitivity, with an average slope of -24.8 mV/decade. The best sensors were successfully applied to the analysis of serum, with recoveries ranging from 96.9 to 106.1% and relative errors of 6.8%.
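
Potentiometric readout of this kind follows a log-linear calibration E = E0 + S·log10(C) over the working range. Using the reported average slope of -44.2 mV/decade and an assumed, purely illustrative intercept E0, a reading can be converted back to a concentration as follows:

```python
import math

S = -44.2        # mV per decade of concentration (reported average slope)
E0 = -400.0      # mV, assumed intercept chosen only for illustration

def potential(conc_mol_per_L):
    """Log-linear calibration E = E0 + S*log10(C) of a potentiometric sensor."""
    return E0 + S * math.log10(conc_mol_per_L)

def concentration(E_mV):
    """Invert the calibration to recover concentration from a reading."""
    return 10 ** ((E_mV - E0) / S)

E = potential(5.8e-11)   # reading at the reported detection limit
print(f"E = {E:.1f} mV -> C = {concentration(E):.2e} mol/L")
```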

Relevance:

10.00%

Publisher:

Abstract:

Prostate Specific Antigen (PSA) is the biomarker of choice for screening prostate cancer throughout the population, with PSA values above 10 ng/mL pointing to a high probability of associated cancer [1]. According to the most recent World Health Organization (WHO) data, prostate cancer is the commonest form of cancer in men in Europe [2]. Early detection of prostate cancer is therefore very important and is currently done by screening PSA in men over 45 years old, combined with other alterations in serum and urine parameters. PSA is a glycoprotein with a molecular mass of approximately 32 kDa consisting of one polypeptide chain, produced by the secretory epithelium of the human prostate. Currently, the standard methods available for PSA screening are immunoassays such as the Enzyme-Linked Immunosorbent Assay (ELISA). These methods are highly sensitive and specific for the detection of PSA, but they require expensive laboratory facilities and highly qualified personnel. Other highly sensitive and specific methods for the detection of PSA have also become available and are mostly immunobiosensors [1,3-5], relying on antibodies. Less expensive methods producing quicker responses are thus needed; these may be achieved by synthesizing artificial antibodies by means of molecular imprinting techniques, coupled to simple and low-cost devices such as potentiometric sensors, an approach that has already proven successful [6]. Potentiometric sensors offer the advantages of selectivity and portability for use at the point of care and have been widely recognized as potential analytical tools in this field. The inherent method is simple, precise, accurate and inexpensive in terms of reagent consumption and equipment. This work therefore proposes a new plastic antibody for PSA, designed over the surface of graphene layers extracted from graphite. Charged monomers were used to enable an oriented tailoring of the PSA rebinding sites; uncharged monomers were used as a control. These materials were used as ionophores in conventional solid-contact graphite electrodes. The results showed that the imprinted materials displayed a selective response to PSA. The electrodes with charged monomers showed a more stable and sensitive response, with an average slope of -44.2 mV/decade and a detection limit of 5.8 × 10⁻¹¹ mol/L (2 ng/mL). The corresponding non-imprinted sensors showed lower sensitivity, with an average slope of -24.8 mV/decade. The best sensors were successfully applied to the analysis of serum samples, with recoveries of 106.5% and relative errors of 6.5%.

Relevance:

10.00%

Publisher:

Abstract:

This paper formulates a novel expression for entropy inspired by the properties of Fractional Calculus. The characteristics of the generalized fractional entropy are tested both on standard probability distributions and on real-world data series. The results reveal that tuning the fractional order allows a high sensitivity to the signal evolution, which is useful for describing the dynamics of complex systems. The concepts are also extended to relative distances and tested with several data sets, confirming the goodness of the generalization.
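
The paper's exact expression is not given in the abstract; the sketch below uses one fractional-order entropy of this family, in an assumed form chosen so that α = 0 recovers the Shannon entropy, to show how the fractional order acts as a tuning parameter:

```python
import numpy as np
from scipy.special import digamma, gamma

def fractional_entropy(p, alpha):
    """One fractional-order generalization of Shannon entropy (assumed form,
    for illustration only; alpha = 0 recovers -sum(p*ln p), and alpha < 1
    is required for the digamma term to be finite)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(np.sum(-p**(1 - alpha) / gamma(alpha + 1)
                        * (np.log(p) + digamma(1) - digamma(1 - alpha))))

dist = np.array([0.7, 0.2, 0.1])
for alpha in (0.0, 0.25, 0.5):
    print(f"alpha = {alpha}: S = {fractional_entropy(dist, alpha):.4f}")
```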

Relevance:

10.00%

Publisher:

Abstract:

This paper studies several topics related to the concept of "fractional" that are not directly part of Fractional Calculus, but that can help the reader pursue new research directions. We introduce the concepts of non-integer positional number systems, fractional sums, fractional powers of a square matrix, tolerant computing and FracSets, negative probabilities, fractional-delay discrete-time linear systems, and the fractional Fourier transform.
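
Of the topics listed, the fractional power of a square matrix is the easiest to make concrete, since SciPy ships an implementation; a minimal example:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# A fractional power A**0.5 is a matrix B with B @ B = A.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
B = fractional_matrix_power(A, 0.5)
print(B)
print(np.allclose(B @ B, A))   # True: B is a square root of A
```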

Relevance:

10.00%

Publisher:

Abstract:

This work is a contribution to the definition and assessment of structural robustness, with special emphasis on the reliability of reinforced concrete structures under corrosion of the longitudinal reinforcement. Several authors' proposals for defining and measuring structural robustness are analyzed and discussed. A probabilistic robustness index is defined, based on the decrease of the reliability index over all possible damage levels, where damage is taken as the corrosion level of the longitudinal reinforcement in terms of rebar weight loss. Damage produces changes both in the cross-sectional area of the rebar and in bond strength. The proposed methodology is illustrated by means of an application example. In order to capture the impact of reinforcement corrosion on the growth of the failure probability, an advanced methodology based on the strong discontinuities approach and an isotropic continuum damage model for concrete is adopted. The methodology consists of a two-step analysis: in the first step, the cross section is analyzed in order to capture phenomena such as the expansion of the reinforcement due to the accumulation of corrosion products, and the damage and cracking of the concrete surrounding the reinforcement; in the second step, a 2D deteriorated structural model is built from the results of the first step. This methodology, combined with Monte Carlo simulation, is then used to compute the failure probability and the reliability index of the structure for different corrosion levels. Finally, structural robustness is assessed using the proposed probabilistic index.
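
The failure-probability/reliability-index loop can be sketched on a toy limit state g = R - S (resistance minus load effect); the distributions and the corrosion-dependent loss of resistance below are illustrative assumptions, not the paper's structural model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
N = 1_000_000   # Monte Carlo samples

def reliability(corrosion_level):
    """Toy Monte Carlo estimate of failure probability and reliability
    index for a limit state g = R - S (all distributions are assumed)."""
    area_factor = 1.0 - corrosion_level          # rebar weight/area loss
    R = rng.normal(50.0 * area_factor, 5.0, N)   # resistance, degraded
    S = rng.normal(30.0, 4.0, N)                 # load effect
    p_f = np.mean(R - S < 0.0)                   # failure probability
    beta = -norm.ppf(p_f)                        # reliability index
    return p_f, beta

for d in (0.0, 0.1, 0.2, 0.3):
    p_f, beta = reliability(d)
    print(f"corrosion {d:.0%}: P_f = {p_f:.2e}, beta = {beta:.2f}")
```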

Relevance:

10.00%

Publisher:

Abstract:

Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware or products under development. Although this is an attractive solution (low cost, and an easy and fast way to carry out coursework), it has major disadvantages. As everything is currently done with or in a computer, students are losing the "feel" for the real values of physical magnitudes. In engineering studies, for instance, and mainly in the first years, students need to learn electronics, algorithmics, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, when real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Likewise, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation throughout the day. In this work we present a framework with a simple interface that can be used by different courses in which computers are part of the teaching/learning process, giving students a more realistic feel by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes (e.g. temperature, light, wind speed) that are either connected to a central server, which students access over an Ethernet protocol, or connected directly to the student's computer/laptop. The sensors use the available communication ports: serial ports, parallel ports, Ethernet or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor values in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using different types of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools: sensors that are not available at a given school can be used by obtaining the values from other places that share them. Moreover, students in more advanced years, with (theoretically) more know-how, can use courses related to electronics development to build new sensor boards and expand the framework further. The final solution is very attractive: low cost, simple to develop, and flexible, since the same materials can be used in several courses, bringing real-world data into students' computer work.
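
As a hedged illustration of how a student script might consume such shared sensors, the sketch below polls a central server over HTTP; the server URL, endpoint layout and JSON fields are hypothetical, since the abstract does not fix a protocol:

```python
import json
from urllib.request import urlopen

SERVER = "http://sensors.example-school.edu/api"   # hypothetical URL

def read_sensor(name):
    """Fetch the latest reading of a named sensor from the central server
    (hypothetical endpoint; adapt to the framework's actual protocol)."""
    with urlopen(f"{SERVER}/{name}/latest") as resp:
        return json.load(resp)          # e.g. {"value": 21.4, "unit": "C"}

reading = read_sensor("room-temperature")
print(f"{reading['value']} {reading['unit']}")
```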

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Surgery for congenital heart disease (CHD) has changed considerably during the last three decades. The results of primary repair have steadily improved, allowing almost all patients to be treated within the pediatric age; nonetheless, an increasing population of adult patients requires surgical treatment. The objective of this study is to present the early surgical results of adult patients who required surgery for CHD within a multicenter European study population. METHODS: Data on the hospital course of 2,012 adult patients (age >= 18 years) who required surgical treatment for CHD from January 1, 1997 through December 31, 2004 were reviewed. Nineteen cardiothoracic centers from 13 European countries contributed to the data collection. RESULTS: Mean age at surgery was 34.4 +/- 14.53 years. Most of the operations were corrective procedures (1,509 patients, 75%), followed by reoperations (464 patients, 23.1%) and palliative procedures (39 patients, 1.9%). Six hundred forty-nine patients (32.2%) required surgical closure of an isolated ostium secundum atrial septal defect. Overall hospital mortality was 2%. Preoperative cyanosis, arrhythmias, and NYHA class III-IV proved to be significant risk factors for hospital mortality. Follow-up data were available for 1,342 of the 1,972 patients (68%) who were discharged home. Late deaths occurred in 6 patients (0.5%). Overall survival probability was 97% at 60 months, and was higher for corrective procedures (98.2%) than for reoperations (94.1%) and palliations (86.1%). CONCLUSIONS: Surgical treatment of CHD in adult patients, in specialized cardiac units, proved quite safe, beneficial, and low-risk.

Relevance:

10.00%

Publisher:

Abstract:

Background: Little is known about the risk of progression to hazardous alcohol use in people currently drinking within safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods: A prospective cohort study of adult general practice attendees in six European countries and Chile, followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6,193 European and 2,462 Chilean attendees recorded AUDIT scores below 8 in men and below 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking, defined by an AUDIT score >= 8 in men and >= 5 in women. Results: 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedges' g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and a Hedges' g of 0.68 (95% CI 0.57, 0.78). Conclusions: The predictAL risk model for the development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in the prevention of alcohol misuse.
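
The modelling pipeline can be illustrated with a minimal sketch: fit a logistic regression on a few baseline predictors and report the apparent c-index (ROC AUC). The data below are synthetic, and the study's 38 candidate predictors, stepwise selection and optimism correction are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),        # sex (synthetic)
    rng.normal(45, 15, n),        # age (synthetic)
    rng.integers(0, 8, n),        # baseline AUDIT score below threshold
])
# Synthetic outcome generated from an assumed true logistic model.
logit = -6.0 + 0.5 * X[:, 0] - 0.01 * X[:, 1] + 0.6 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # hazardous drinking at 6 months

model = LogisticRegression().fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent c-index = {c_index:.3f}")     # optimism-corrected in the study
```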

Relevance:

10.00%

Publisher:

Abstract:

Presented at the 23rd International Conference on Real-Time Networks and Systems (RTNS 2015), Main Track, 4-6 November 2015, Lille, France.