886 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability
Abstract:
Graduate Program in Cartographic Sciences (Pós-graduação em Ciências Cartográficas) - FCT
Abstract:
Aim: To report a possible case of fluoxetine-induced tremor treated as Parkinson’s disease in an elderly female patient who was noncompliant with her pharmacotherapy, had uncontrolled hypertension and was using fluoxetine to treat depression. Presentation of Case: The patient complained of morning sleepiness, agitation, anxiety, insomnia and mental confusion. Her greatest concern was bilateral hand tremors which, in her view, became worse after biperiden was prescribed; she therefore stopped taking it. The initial medication was: omeprazole, losartan, biperiden, fluoxetine, atenolol + chlorthalidone, acetylsalicylic acid, atorvastatin and diazepam. Pharmacotherapeutic follow-up was performed to check the necessity, safety and effectiveness of the treatment. Discussion: During the analysis of the pharmacotherapy, the patient showed uncontrolled blood pressure and had difficulty complying with the treatment. In view of the complaints expressed by the patient, our first hypothesis was a possible serotonin syndrome related to fluoxetine use. We proposed a change in the fluoxetine regimen and discontinuation of biperiden. As the tremors persisted, we suggested replacing fluoxetine with sertraline, since fluoxetine-induced tremor could explain the complaint. This approach resolved the drug-related problem identified. Conclusion: The tremors reported by the patient were identified as an iatrogenic event related to fluoxetine, which was resolved by managing the serotonin-reuptake inhibitor therapy.
Abstract:
In this action research study of my classroom of 8th grade mathematics students, I investigated whether learning different problem solving strategies helped students successfully solve problems. I also investigated whether students’ knowledge of the topics involved in story problems had an impact on their success rates. I discovered that students were more successful after learning different problem solving strategies and when given problems with which they had experience. I also discovered that students put forth greater effort when they approached the story problem as a game rather than as just another math problem they had to solve. An unexpected result was that the students’ degree of effort had a major impact on their success rate. As a result of this research, I plan to continue to focus on problem solving strategies in my classes. I also plan to improve my methods for eliciting students’ full effort in class.
Abstract:
Objectives: The Brazilian public health system does not provide electroconvulsive therapy (ECT), which is limited to a few academic services, and national mental health policies are against ECT. Our objectives were to analyze critically the public policies toward ECT and to present the current situation using statistics from the Institute of Psychiatry of the University of Sao Paulo (IPq-HCFMUSP) and summary data from the other 13 ECT services identified in the country. Methods: Data regarding ECT treatment at the IPq-HCFMUSP were collected from January 2009 to June 2010 (demographics, number of sessions, and diagnoses). All the data were analyzed using SPSS 19, Epi Info 2000, and Excel. Results: During this period, 331 patients were treated at IPq-HCFMUSP: 221 (67%) were from Sao Paulo city, 50 (15.2%) from Sao Paulo's metropolitan area, 39 (11.8%) from Sao Paulo's countryside, and 20 (6.1%) from other states; 7,352 ECT treatments were delivered, 63.0% (4,629) of them entirely through the public health system (although not funded by the federal government); the main diagnoses were a mood disorder in 86.4% and schizophrenia in 7.3% of the cases. Conclusions: There is an important lack of public assistance for ECT, mainly affecting poor and severely ill patients. The university services are overcrowded and cannot handle all the referrals. The authors press for changes in the mental health policies.
Abstract:
Context. Detections of molecular lines, mainly from H2 and CO, reveal molecular material in planetary nebulae. Observations of a variety of molecules suggest that the molecular composition in these objects differs from that found in interstellar clouds or in circumstellar envelopes. The success of existing models, which are mostly devoted to explaining molecular densities in specific planetary nebulae, is however still only partial. Aims. The present study aims at identifying the influence of stellar and nebular properties on the molecular composition of planetary nebulae by means of chemical models. A comparison of theoretical results with those derived from observations may provide clues to the conditions that favor the presence of a particular molecule. Methods. A self-consistent photoionization numerical code was adapted to simulate cold molecular regions beyond the ionized zone. The code was used to obtain a grid of models, and the resulting column densities are compared with those inferred from observations. Results. Our models show that the inclusion of an incident flux of X-rays is required to explain the molecular composition derived for planetary nebulae. We also obtain a more accurate relation for the N(CO)/N(H2) ratio in these objects. Molecular masses obtained in previous works in the literature were then recalculated, showing that these masses can be underestimated by up to three orders of magnitude. We conclude that the problem of the missing mass in planetary nebulae can be solved by a more accurate calculation of the molecular mass.
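To see why the adopted N(CO)/N(H2) ratio drives the mass estimate, a schematic scaling argument (not the specific relation derived in this work) is enough: the H2 column, and hence the molecular mass, inferred from an observed CO column scales inversely with the assumed abundance ratio,

\[
M_{\mathrm{mol}} \;\propto\; N(\mathrm{H_2})\,A \;=\; \frac{N(\mathrm{CO})_{\mathrm{obs}}}{\big[N(\mathrm{CO})/N(\mathrm{H_2})\big]_{\mathrm{assumed}}}\,A ,
\]

where A is the projected emitting area; adopting a ratio that is three orders of magnitude too high therefore underestimates the mass by the same factor.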
Abstract:
Introduction: This research project examined the influence of doctors' speciality on primary health care (PHC) problem solving in Belo Horizonte (BH), Brazil, comparing homeopathic doctors with family health (FH) doctors from the management's and the patients' viewpoints. In BH, both FH and homeopathic doctors work in PHC. The index of resolvability (IR) is used to compare the resolution of problems by doctors. Methods: The present research compared IR using official data from the Secretariat of Health, test requests made by the doctors, and 482 structured interviews with patients. A total of 217,963 consultations by 14 homeopaths and 67 FH doctors between 1 July 2006 and 30 June 2007 were analysed. Results: The results show significantly greater problem resolution by homeopaths than by FH doctors. Conclusion: In BH, the medical speciality, homeopathy or FH, has an impact on problem solving, both from the managers' and the patients' point of view. Homeopaths request fewer tests and have a better IR than FH doctors. Specialisation in homeopathy is an independent positive factor in problem solving at PHC level in BH, Brazil. Homeopathy (2012) 101, 44-50.
Abstract:
Support Vector Machines (SVMs) have achieved very good performance on different learning problems. However, the success of SVMs depends on an adequate choice of the values of a number of parameters (e.g., the kernel and regularization parameters). In the current work, we propose combining meta-learning and search algorithms to deal with the problem of SVM parameter selection. In this combination, given a new problem to be solved, meta-learning is employed to recommend SVM parameter values based on parameter configurations that have been successfully adopted in previous similar problems. The parameter values returned by meta-learning are then used as initial search points by a search technique, which further explores the parameter space. In this proposal, we expect the initial solutions provided by meta-learning to be located in good regions of the search space (i.e., closer to optimal solutions). Hence, the search algorithm would need to evaluate fewer candidate solutions when looking for an adequate solution. In this work, we investigate the combination of meta-learning with two search algorithms: Particle Swarm Optimization and Tabu Search. The implemented hybrid algorithms were used to select the values of two SVM parameters in the regression domain. These combinations were compared with the use of the search algorithms without meta-learning. The experimental results on a set of 40 regression problems showed that, on average, the proposed hybrid methods obtained lower error rates than their components applied in isolation.
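A minimal sketch of the hybrid idea, assuming scikit-learn's SVR, a 5-fold cross-validated error as the fitness, and a hard-coded stand-in for the meta-learning recommendations (the meta-database, its similarity measure and all numeric settings below are illustrative, not those of the study):

# Hybrid sketch: meta-learning recommends initial SVR parameter settings and a
# small Particle Swarm Optimization run refines them by minimizing CV error.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)

def cv_error(log_C, log_gamma):
    """Cross-validated MSE for SVR with parameters given on a log10 scale."""
    model = SVR(C=10 ** log_C, gamma=10 ** log_gamma)
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    return -scores.mean()

# Hypothetical meta-learning step: (log10 C, log10 gamma) settings that worked
# well on the most similar past problems (hard-coded here for illustration).
meta_recommended = np.array([[0.0, -2.0], [1.0, -1.5], [2.0, -3.0]])

# Particle Swarm Optimization seeded with the meta-recommended settings.
rng = np.random.default_rng(0)
n_particles, n_iters, w, c1, c2 = 8, 20, 0.7, 1.5, 1.5
pos = np.vstack([meta_recommended,
                 rng.uniform([-1, -4], [3, 0], size=(n_particles - 3, 2))])
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([cv_error(*p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -4], [3, 0])
    err = np.array([cv_error(*p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print("best log10(C), log10(gamma):", gbest, "CV MSE:", pbest_err.min())

The same seeding strategy would apply to Tabu Search: the meta-recommended configurations become the initial solutions from which neighbourhood moves are explored.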
Abstract:
[EN] In this paper we present a method for the regularization of 3D cylindrical surfaces. By a cylindrical surface we mean a 3D surface that can be expressed as a map S(l, µ) → R³, where (l, µ) represents a cylindrical parametrization of the 3D surface. We build an initial cylindrical parametrization of the surface and propose a new method to regularize it. This method takes into account the information supplied by the disparity maps computed between pairs of images to constrain the regularization of the set of 3D points. We propose a model based on an energy composed of two terms: an attachment term that minimizes the difference between the image coordinates and the disparity maps, and a second term that enables regularization by means of anisotropic diffusion. One interesting advantage of this approach is that we regularize the 3D surface by solving a two-dimensional minimization problem.
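In schematic form (illustrative notation, not the paper's exact functional), such a two-term energy combines a disparity attachment term with an anisotropic smoothness term:

\[
E(S) \;=\; \int_{\Omega} \big| \pi\big(S(l,\mu)\big) - D(l,\mu) \big|^{2}\, dl\, d\mu
\;+\; \lambda \int_{\Omega} g\big(\|\nabla S\|\big)\,\|\nabla S\|^{2}\, dl\, d\mu ,
\]

where D denotes the disparity data, π(·) maps a 3D point to the image/disparity coordinates in which the data are given, g is a decreasing edge-stopping function whose gradient flow yields anisotropic diffusion, λ balances the two terms, and the minimization is carried out over the two-dimensional parameter domain Ω.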
Abstract:
In this thesis, two physical flow experiments on nonwoven fabrics are investigated, with the aim of identifying unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measured data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in its saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and minimize it with iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction procedure, to a linear degenerate parabolic boundary value problem. We describe the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
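In generic notation (an illustrative sketch of a standard output least-squares setting, not the thesis' exact formulation), with F(q) the free boundary predicted by the direct problem for a diffusivity coefficient q represented by quadratic B-spline coefficients, and s_meas the measured free-boundary data, the functional and one Levenberg-Marquardt step read:

\[
J(q) \;=\; \tfrac{1}{2}\,\big\| F(q) - s_{\mathrm{meas}} \big\|^2 , \qquad
q_{k+1} \;=\; q_k + \big( F'(q_k)^{*} F'(q_k) + \mu_k I \big)^{-1} F'(q_k)^{*}\big( s_{\mathrm{meas}} - F(q_k) \big),
\]

where F'(q_k) is the derivative of the free boundary with respect to the coefficient and μ_k > 0 is the regularization parameter; the IRGN method uses a similar update with an additional penalty toward an initial guess and a prescribed decreasing sequence μ_k.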
Abstract:
Different tools have been used to set up and adopt the model for the fulfilment of the objective of this research.

1. The model. The base model is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.

2. Tools and methods.
2.1. The Expert Choice software. Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem.
2.2. Personal interviews with the farms. For this research, the EMAS-certified farms of the Emilia-Romagna region were identified; the information was provided by the EMAS centre in Vienna. Personal interviews were carried out with each farm in order to obtain a complete and realistic judgement on each criterion of the hierarchy.
2.3. Questionnaire. A supporting questionnaire was also delivered and used during the interviews.

3. Elaboration of the data. After data collection, the data were elaborated using the Expert Choice software.

4. Results of the analysis. The elaboration of the figures above (see other document) yields a series of numbers which are fractions of unity; each is to be interpreted as the relative contribution of an element to the fulfilment of the corresponding objective. Calculating the benefit/cost ratio for each alternative gives:
- Alternative one, implement EMAS: benefits ratio 0.877, costs ratio 0.815, benefit/cost ratio 0.877/0.815 = 1.08.
- Alternative two, do not implement EMAS: benefits ratio 0.123, costs ratio 0.185, benefit/cost ratio 0.123/0.185 = 0.66.
As stated above, the alternative with the highest ratio is the best solution for the organization. The research carried out and the model implemented therefore suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted, however, that the ratio of 1.08 is a relatively low positive value; this shows the fragility of the conclusion and suggests a careful examination of the benefits and costs of each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their intervention regarding adoption of the scheme in the agricultural sector.

According to the AHP elaboration of judgements, the main considerations on benefits are the following, illustrated by the sketch after this list:
- Legal compliance appears to be the most important benefit for the agricultural sector, with a rank of 0.471.
- The next two most important benefits are improved internal organization (0.230) and competitive advantage (0.221), the latter mostly due to the sub-element improved image (0.743).
- Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.

The main considerations on costs are:
- External costs appear to be far more important than internal ones (0.857 versus 0.143), suggesting that EMAS consultancy and verification costs remain the biggest obstacle.
- The implementation of the EMS is the most demanding element among the internal costs (0.750).
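As a minimal illustration of the AHP mechanics used above (the pairwise comparison matrix is invented for the example; only the final benefit and cost priorities are taken from the figures reported above), criterion weights are the principal eigenvector of a pairwise comparison matrix and the alternatives are ranked by their benefit/cost ratio:

# Minimal AHP sketch: priority weights from the principal eigenvector of a
# pairwise comparison matrix, a consistency check, and benefit/cost ratios.
import numpy as np

def ahp_priorities(pairwise):
    """Principal eigenvector of a positive reciprocal matrix, normalized to sum 1."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(lambda_max, n):
    """Saaty consistency ratio; values below about 0.10 are acceptable."""
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}
    return (lambda_max - n) / (n - 1) / random_index[n]

# Hypothetical pairwise comparisons (Saaty 1-9 scale) of three benefit criteria:
# legal compliance, improved internal organization, competitive advantage.
benefit_criteria = np.array([[1.0,   3.0, 3.0],
                             [1/3.0, 1.0, 1.0],
                             [1/3.0, 1.0, 1.0]])
weights, lam = ahp_priorities(benefit_criteria)
print("criterion weights:", weights.round(3),
      "CR:", round(consistency_ratio(lam, 3), 3))

# Overall benefit and cost priorities of the two alternatives (taken from the
# study's reported figures) combined into benefit/cost ratios.
benefit = np.array([0.877, 0.123])   # implement EMAS, do not implement EMAS
cost = np.array([0.815, 0.185])
print("benefit/cost ratios:", (benefit / cost).round(2))  # -> [1.08, 0.66]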
The gaseous environment of radio galaxies: a new perspective from high-resolution X-ray spectroscopy
Abstract:
It is known that massive black holes have a profound effect on the evolution of galaxies, and possibly on their formation, by regulating the amount of gas available for star formation. However, how black holes and galaxies communicate is still an open problem, depending on how much of the released energy interacts with the circumnuclear matter. In recent years, most studies of feedback have focused primarily on AGN jet/cavity systems in the most massive galaxy clusters. This thesis investigates the feedback phenomena in radio-loud AGNs from a different perspective, studying isolated radio galaxies through high-resolution spectroscopy. In particular, one NLRG and three BLRGs are studied, searching for warm gas, both in emission and in absorption, in the soft X-ray band. I show that the soft spectrum of 3C 33 originates from gas photoionized by the central engine. I detect warm absorbers (WA) for the first time in 3C 382 and 3C 390.3. I show that the observed warm emitters/absorbers are not uniform and are probably located in the NLR. The detected WAs are slow, implying mass outflow rates and kinetic luminosities always well below 1% of L(acc) as well as of P(jet). Finally, the radio-loud properties are compared with those of type 1 radio-quiet (RQ) AGNs. A positive correlation is found between the mass outflow rate/kinetic luminosity and the radio loudness. This seems to suggest that the presence of a radio source (the jet?) affects the distribution of the absorbing gas. Alternatively, if the gas distribution is similar in Seyferts and radio galaxies, the M(out) versus radio-loudness relation could simply indicate a larger ejection of matter in the form of a wind in powerful radio AGNs.
Abstract:
In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from the arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov chain Monte Carlo (rj-MCMC) sampling method. It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. I show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need to make subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties on the model parameters. It is therefore very suitable for investigating the velocity structure in regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be detected even in regions where the ray density is low and standard linearized codes fail. I also present high-resolution Vp and Vp/Vs models for two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in the data fit compared with models obtained from the same data sets with linearized inversion codes.
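For reference, the acceptance probability of a trans-dimensional move from a model m to a proposed model m' in reversible jump MCMC takes the standard Green form (generic notation; not the specific expression implemented in the code described here):

\[
\alpha\big(m \to m'\big) \;=\; \min\!\left\{ 1,\;
\frac{p(m')\,p(d \mid m')}{p(m)\,p(d \mid m)}\;
\frac{q(m \mid m')}{q(m' \mid m)}\;\big|\,J\,\big| \right\},
\]

where p(·) is the prior, p(d | ·) the likelihood, q the proposal density and |J| the Jacobian of the mapping between parameter spaces of different dimension; birth and death moves that add or remove a velocity cell are accepted or rejected with this probability.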
Abstract:
Thermally induced flows occur both in nature and in industry. Of interest for this research are convection in the Earth's mantle and in glass melting tanks. The material transport taking place there results from differences in density, temperature and chemical concentration within the convecting material. To improve the understanding of the processes involved, numerical modelling is carried out by numerous research groups, and the algorithms used are usually verified by analysing laboratory experiments. The focus of this research is the development of a method to determine the three-dimensional temperature distribution for the investigation of thermally induced flows in an experimental tank. A direct temperature measurement inside the experimental material or the glass melt, however, influences the flow behaviour. Therefore, impedance tomography, which does not disturb the dynamics, is used. The basis of this method is the extended Arrhenius relationship between temperature and specific electrical conductivity. During the laboratory experiments, a viscous polyethylene-glycol-water mixture in a tank is heated from below. Taking the scaling into account, the flows generated in this way are an analogue both of the Earth's mantle and of glass melting tanks. The geoelectric measurements are made via several electrodes installed on the tank walls. The subsequent three-dimensional inversion of the electrical resistances yields a model of the distribution of the specific electrical conductivity inside the experimental tank, which is converted into a temperature distribution by means of the extended Arrhenius formula. To demonstrate the suitability of this method for the non-invasive determination of the three-dimensional temperature distribution, additional direct temperature measurements were made with several thermocouples at the tank walls and the values were compared. On the whole, the interior temperatures can be reconstructed well, the achievable accuracy depending on the spatial and temporal resolution of the direct-current geoelectrics.
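In its basic form (the thesis uses an extended Arrhenius relation whose additional terms are not reproduced here), the link between specific electrical conductivity σ and temperature T, and the inversion used to convert the reconstructed conductivity model into a temperature field, reads:

\[
\sigma(T) \;=\; \sigma_0 \,\exp\!\left(-\frac{E_a}{k_B T}\right)
\quad\Longrightarrow\quad
T \;=\; \frac{E_a}{k_B\,\ln\!\big(\sigma_0/\sigma\big)} ,
\]

with the pre-exponential factor σ₀ and the activation energy E_a determined from calibration measurements of the polyethylene-glycol-water mixture.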
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying in part graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate for every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore, we explain the relationship of the sector decomposition method to the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in results for the Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
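For reference, the two Symanzik polynomials admit the standard spanning-tree and spanning-two-forest representations (written here in one common convention; signs and normalizations vary in the literature and are not necessarily those of the dissertation):

\[
\mathcal{U}(x) \;=\; \sum_{T}\;\prod_{e \notin T} x_e , \qquad
\mathcal{F}(x) \;=\; \sum_{(T_1,T_2)} \Big(\prod_{e \notin T_1 \cup T_2} x_e\Big)\, s_{(T_1,T_2)} \;+\; \mathcal{U}(x)\sum_{e} x_e\, m_e^2 ,
\]

where the first sum runs over the spanning trees T of the graph, the second over its spanning two-forests (T_1, T_2), s_{(T_1,T_2)} is the squared momentum flowing between the two forest components, x_e are the Feynman parameters and m_e the internal masses.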
Abstract:
This thesis collects the outcomes of a Ph.D. course in Telecommunications Engineering and is focused on the study and design of possible techniques to counteract interfering signals in Global Navigation Satellite Systems (GNSS). The subject is the jamming threat in navigation systems, which has become an increasingly important topic in recent years due to the wide diffusion of GNSS-based civil applications. Detection and mitigation techniques are developed to counter jamming signals and are tested in different scenarios, including sophisticated signals. The thesis is organized in two main parts, which deal with the management of intentional counterfeit GNSS signals. The first part deals with interference management, focusing on intentional interfering signals. In particular, a technique for the detection and localization of the interfering signal level in the GNSS bands in the frequency domain is proposed. In addition, an effective mitigation technique is introduced which exploits the periodic characteristics of common jamming signals to reduce the interfering effects at the receiver side. This technique was also tested in a different and more complicated scenario and remained effective in mitigating and cancelling the interfering signal, without high complexity. The second part also deals with the problem of interference management, but for a more sophisticated signal. The attention is focused on the detection of the spoofing signal, which is the most complex among the jamming signal types. Because this kind of signal is highly difficult to detect and mitigate, the spoofing threat is considered the most dangerous. In this work, a possible technique able to detect this sophisticated signal is proposed, jointly observing and exploiting the outputs of several operational block measurements along the GNSS receiver operating chain.
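As a toy illustration of frequency-domain interference detection (a generic sketch, not the detection algorithm developed in the thesis; sample rate, jammer parameters and threshold are invented), a narrowband jammer can be flagged by comparing periodogram bins against a robust estimate of the noise floor:

# Toy frequency-domain detector: flag FFT bins whose power exceeds the
# estimated noise floor by a fixed margin. All signal parameters are assumed.
import numpy as np

fs = 4.0e6                      # complex sample rate [Hz] (assumed front-end)
n = 4096
t = np.arange(n) / fs
rng = np.random.default_rng(1)

# GNSS signals sit below the thermal noise floor, so the "clean" baseband
# samples are modelled here simply as complex white noise.
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
jammer = 0.5 * np.exp(2j * np.pi * 250e3 * t)       # CW interferer at +250 kHz
x = noise + jammer

# Periodogram (power per FFT bin, dB) and a robust noise-floor estimate.
spectrum = np.fft.fftshift(np.fft.fft(x)) / np.sqrt(n)
power_db = 10 * np.log10(np.abs(spectrum) ** 2)
noise_floor_db = np.median(power_db)                # robust to narrowband peaks
threshold_db = noise_floor_db + 10.0                # detection margin (assumed)

freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
hits = power_db > threshold_db
if hits.any():
    print("interference detected near %.1f kHz (peak %.1f dB above floor)" %
          (freqs[np.argmax(power_db)] / 1e3, power_db.max() - noise_floor_db))
else:
    print("no interference detected")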